Sep 12 17:11:45.236626 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 17:11:45.236672 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:11:45.236697 kernel: KASLR disabled due to lack of seed
Sep 12 17:11:45.236713 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:11:45.236729 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 17:11:45.236744 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:11:45.236762 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 17:11:45.236777 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:11:45.236793 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:11:45.236808 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 17:11:45.236828 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:11:45.236844 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 17:11:45.236860 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 17:11:45.236876 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 17:11:45.236894 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:11:45.236915 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 17:11:45.236932 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 17:11:45.236949 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 17:11:45.236965 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 17:11:45.236981 kernel: printk: bootconsole [uart0] enabled
Sep 12 17:11:45.236998 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:11:45.237014 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:11:45.237031 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 17:11:45.237047 kernel: Zone ranges:
Sep 12 17:11:45.237064 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:11:45.237080 kernel: DMA32 empty
Sep 12 17:11:45.237101 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 17:11:45.237117 kernel: Movable zone start for each node
Sep 12 17:11:45.237133 kernel: Early memory node ranges
Sep 12 17:11:45.237149 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 17:11:45.237166 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 17:11:45.237182 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 17:11:45.237198 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 17:11:45.237215 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 17:11:45.237231 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 17:11:45.237247 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 17:11:45.237263 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 17:11:45.237279 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:11:45.237300 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 17:11:45.237318 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:11:45.237341 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 17:11:45.237359 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:11:45.237377 kernel: psci: Trusted OS migration not required
Sep 12 17:11:45.237398 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:11:45.237416 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 17:11:45.237434 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:11:45.237451 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:11:45.239517 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:11:45.239537 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:11:45.239555 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:11:45.239573 kernel: CPU features: detected: Spectre-v2
Sep 12 17:11:45.239591 kernel: CPU features: detected: Spectre-v3a
Sep 12 17:11:45.239608 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:11:45.239626 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 17:11:45.239652 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 17:11:45.239670 kernel: alternatives: applying boot alternatives
Sep 12 17:11:45.239690 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:11:45.239709 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:11:45.239726 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:11:45.239744 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:11:45.239761 kernel: Fallback order for Node 0: 0
Sep 12 17:11:45.239779 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 17:11:45.239796 kernel: Policy zone: Normal
Sep 12 17:11:45.239813 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:11:45.239830 kernel: software IO TLB: area num 2.
Sep 12 17:11:45.239853 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 17:11:45.239871 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 17:11:45.239889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:11:45.239907 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:11:45.239925 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:11:45.239943 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:11:45.239961 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:11:45.239979 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:11:45.239996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:11:45.240014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:11:45.240031 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:11:45.240053 kernel: GICv3: 96 SPIs implemented
Sep 12 17:11:45.240071 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:11:45.240088 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:11:45.242150 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 17:11:45.242510 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 17:11:45.242860 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 17:11:45.242984 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:11:45.243512 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:11:45.243536 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 17:11:45.243554 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 17:11:45.243572 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 17:11:45.243590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:11:45.243620 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 17:11:45.243638 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 17:11:45.243657 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 17:11:45.243675 kernel: Console: colour dummy device 80x25
Sep 12 17:11:45.243693 kernel: printk: console [tty1] enabled
Sep 12 17:11:45.243711 kernel: ACPI: Core revision 20230628
Sep 12 17:11:45.243729 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 17:11:45.243747 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:11:45.243765 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:11:45.243789 kernel: landlock: Up and running.
Sep 12 17:11:45.243807 kernel: SELinux: Initializing.
Sep 12 17:11:45.243825 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:11:45.243844 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:11:45.243863 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:11:45.243881 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:11:45.243899 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:11:45.243918 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:11:45.243936 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 17:11:45.243959 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 17:11:45.243978 kernel: Remapping and enabling EFI services.
Sep 12 17:11:45.243995 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:11:45.244013 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:11:45.244031 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 17:11:45.244049 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 17:11:45.244066 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 17:11:45.244084 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:11:45.244101 kernel: SMP: Total of 2 processors activated.
Sep 12 17:11:45.244119 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:11:45.244142 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 17:11:45.244160 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:11:45.244190 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:11:45.244213 kernel: alternatives: applying system-wide alternatives
Sep 12 17:11:45.244232 kernel: devtmpfs: initialized
Sep 12 17:11:45.244250 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:11:45.244269 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:11:45.244288 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:11:45.244307 kernel: SMBIOS 3.0.0 present.
Sep 12 17:11:45.244330 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 17:11:45.244349 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:11:45.244368 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:11:45.244386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:11:45.244405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:11:45.244424 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:11:45.244443 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Sep 12 17:11:45.245514 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:11:45.245544 kernel: cpuidle: using governor menu
Sep 12 17:11:45.245563 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:11:45.245583 kernel: ASID allocator initialised with 65536 entries
Sep 12 17:11:45.245602 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:11:45.245620 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:11:45.245639 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 17:11:45.245658 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:11:45.245677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:11:45.245704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:11:45.245723 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:11:45.245742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:11:45.245760 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:11:45.245779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:11:45.245798 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:11:45.245816 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:11:45.245835 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:11:45.245853 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:11:45.245876 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:11:45.245895 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:11:45.245913 kernel: ACPI: Interpreter enabled
Sep 12 17:11:45.245932 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:11:45.245973 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:11:45.245993 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 17:11:45.247621 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:11:45.247865 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:11:45.248080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:11:45.248287 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 17:11:45.248518 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 17:11:45.248547 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 17:11:45.248567 kernel: acpiphp: Slot [1] registered
Sep 12 17:11:45.248586 kernel: acpiphp: Slot [2] registered
Sep 12 17:11:45.248604 kernel: acpiphp: Slot [3] registered
Sep 12 17:11:45.248623 kernel: acpiphp: Slot [4] registered
Sep 12 17:11:45.248648 kernel: acpiphp: Slot [5] registered
Sep 12 17:11:45.248667 kernel: acpiphp: Slot [6] registered
Sep 12 17:11:45.248686 kernel: acpiphp: Slot [7] registered
Sep 12 17:11:45.248704 kernel: acpiphp: Slot [8] registered
Sep 12 17:11:45.248722 kernel: acpiphp: Slot [9] registered
Sep 12 17:11:45.248740 kernel: acpiphp: Slot [10] registered
Sep 12 17:11:45.248759 kernel: acpiphp: Slot [11] registered
Sep 12 17:11:45.248777 kernel: acpiphp: Slot [12] registered
Sep 12 17:11:45.248796 kernel: acpiphp: Slot [13] registered
Sep 12 17:11:45.248814 kernel: acpiphp: Slot [14] registered
Sep 12 17:11:45.248838 kernel: acpiphp: Slot [15] registered
Sep 12 17:11:45.248856 kernel: acpiphp: Slot [16] registered
Sep 12 17:11:45.248875 kernel: acpiphp: Slot [17] registered
Sep 12 17:11:45.248893 kernel: acpiphp: Slot [18] registered
Sep 12 17:11:45.248911 kernel: acpiphp: Slot [19] registered
Sep 12 17:11:45.248929 kernel: acpiphp: Slot [20] registered
Sep 12 17:11:45.248948 kernel: acpiphp: Slot [21] registered
Sep 12 17:11:45.248966 kernel: acpiphp: Slot [22] registered
Sep 12 17:11:45.248984 kernel: acpiphp: Slot [23] registered
Sep 12 17:11:45.249007 kernel: acpiphp: Slot [24] registered
Sep 12 17:11:45.249026 kernel: acpiphp: Slot [25] registered
Sep 12 17:11:45.249045 kernel: acpiphp: Slot [26] registered
Sep 12 17:11:45.249063 kernel: acpiphp: Slot [27] registered
Sep 12 17:11:45.249081 kernel: acpiphp: Slot [28] registered
Sep 12 17:11:45.249100 kernel: acpiphp: Slot [29] registered
Sep 12 17:11:45.249118 kernel: acpiphp: Slot [30] registered
Sep 12 17:11:45.249136 kernel: acpiphp: Slot [31] registered
Sep 12 17:11:45.249154 kernel: PCI host bridge to bus 0000:00
Sep 12 17:11:45.249363 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 17:11:45.250685 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:11:45.250899 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:11:45.251087 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 17:11:45.251328 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 17:11:45.253745 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 17:11:45.254011 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 17:11:45.254248 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:11:45.256503 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 17:11:45.256809 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:11:45.257035 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:11:45.257244 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 17:11:45.257451 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 17:11:45.257695 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 17:11:45.257903 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:11:45.258146 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 17:11:45.258358 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 17:11:45.261212 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 17:11:45.261478 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 17:11:45.261706 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 17:11:45.261903 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 17:11:45.262123 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:11:45.262313 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:11:45.262339 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:11:45.262359 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:11:45.262379 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:11:45.262397 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:11:45.262416 kernel: iommu: Default domain type: Translated
Sep 12 17:11:45.262434 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:11:45.266517 kernel: efivars: Registered efivars operations
Sep 12 17:11:45.266558 kernel: vgaarb: loaded
Sep 12 17:11:45.266578 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:11:45.266598 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:11:45.266617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:11:45.266635 kernel: pnp: PnP ACPI init
Sep 12 17:11:45.266915 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 17:11:45.266944 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:11:45.266974 kernel: NET: Registered PF_INET protocol family
Sep 12 17:11:45.266994 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:11:45.267013 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:11:45.267032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:11:45.267051 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:11:45.267069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:11:45.267088 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:11:45.267107 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:11:45.267125 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:11:45.267148 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:11:45.267167 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:11:45.267185 kernel: kvm [1]: HYP mode not available
Sep 12 17:11:45.267204 kernel: Initialise system trusted keyrings
Sep 12 17:11:45.267222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:11:45.267241 kernel: Key type asymmetric registered
Sep 12 17:11:45.267259 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:11:45.267277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:11:45.267296 kernel: io scheduler mq-deadline registered
Sep 12 17:11:45.267319 kernel: io scheduler kyber registered
Sep 12 17:11:45.267337 kernel: io scheduler bfq registered
Sep 12 17:11:45.267591 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 17:11:45.267621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:11:45.267641 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:11:45.267660 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 17:11:45.267678 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:11:45.267697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:11:45.267723 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:11:45.267938 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 17:11:45.268055 kernel: printk: console [ttyS0] disabled
Sep 12 17:11:45.268415 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 17:11:45.270582 kernel: printk: console [ttyS0] enabled
Sep 12 17:11:45.270615 kernel: printk: bootconsole [uart0] disabled
Sep 12 17:11:45.270635 kernel: thunder_xcv, ver 1.0
Sep 12 17:11:45.270655 kernel: thunder_bgx, ver 1.0
Sep 12 17:11:45.270674 kernel: nicpf, ver 1.0
Sep 12 17:11:45.270704 kernel: nicvf, ver 1.0
Sep 12 17:11:45.270967 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:11:45.271168 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:11:44 UTC (1757697104)
Sep 12 17:11:45.271195 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:11:45.271215 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 17:11:45.271234 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:11:45.271253 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:11:45.271271 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:11:45.271297 kernel: Segment Routing with IPv6
Sep 12 17:11:45.271316 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:11:45.271336 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:11:45.271354 kernel: Key type dns_resolver registered
Sep 12 17:11:45.271373 kernel: registered taskstats version 1
Sep 12 17:11:45.271392 kernel: Loading compiled-in X.509 certificates
Sep 12 17:11:45.271411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:11:45.271430 kernel: Key type .fscrypt registered
Sep 12 17:11:45.271449 kernel: Key type fscrypt-provisioning registered
Sep 12 17:11:45.271514 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:11:45.271535 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:11:45.271554 kernel: ima: No architecture policies found
Sep 12 17:11:45.271573 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:11:45.271592 kernel: clk: Disabling unused clocks
Sep 12 17:11:45.271611 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:11:45.271630 kernel: Run /init as init process
Sep 12 17:11:45.271648 kernel: with arguments:
Sep 12 17:11:45.271667 kernel: /init
Sep 12 17:11:45.271685 kernel: with environment:
Sep 12 17:11:45.271709 kernel: HOME=/
Sep 12 17:11:45.271728 kernel: TERM=linux
Sep 12 17:11:45.271746 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:11:45.271769 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:11:45.271793 systemd[1]: Detected virtualization amazon.
Sep 12 17:11:45.271814 systemd[1]: Detected architecture arm64.
Sep 12 17:11:45.271834 systemd[1]: Running in initrd.
Sep 12 17:11:45.271859 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:11:45.271879 systemd[1]: Hostname set to .
Sep 12 17:11:45.271900 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:11:45.271920 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:11:45.271940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:45.271961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:45.271983 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:11:45.272005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:11:45.272030 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:11:45.272052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:11:45.272075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:11:45.272096 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:11:45.272117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:45.272137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:45.272157 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:11:45.272182 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:11:45.272202 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:11:45.272223 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:11:45.272243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:11:45.272263 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:11:45.272283 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:11:45.272304 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:11:45.272324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:45.272344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:45.272370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:45.272390 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:11:45.272410 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:11:45.272431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:11:45.272451 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:11:45.276184 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:11:45.276210 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:11:45.276231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:11:45.276263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:45.276285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:11:45.276306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:45.276326 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:11:45.276395 systemd-journald[250]: Collecting audit messages is disabled.
Sep 12 17:11:45.276446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:11:45.276508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:45.276532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:45.276554 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:11:45.276581 kernel: Bridge firewalling registered
Sep 12 17:11:45.276602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:45.276622 systemd-journald[250]: Journal started
Sep 12 17:11:45.276660 systemd-journald[250]: Runtime Journal (/run/log/journal/ec299692df6e82a29e21207561442b19) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:11:45.280278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:11:45.224073 systemd-modules-load[251]: Inserted module 'overlay'
Sep 12 17:11:45.264727 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 12 17:11:45.291876 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:11:45.288508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:11:45.314768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:11:45.330856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:11:45.340368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:45.354545 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:45.360309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:45.369857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:45.380907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:11:45.394780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:45.426670 dracut-cmdline[284]: dracut-dracut-053
Sep 12 17:11:45.434660 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:11:45.485621 systemd-resolved[287]: Positive Trust Anchors:
Sep 12 17:11:45.485658 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:11:45.485735 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:11:45.570487 kernel: SCSI subsystem initialized
Sep 12 17:11:45.577495 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:11:45.589492 kernel: iscsi: registered transport (tcp)
Sep 12 17:11:45.612203 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:11:45.612276 kernel: QLogic iSCSI HBA Driver
Sep 12 17:11:45.714485 kernel: random: crng init done
Sep 12 17:11:45.714729 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 12 17:11:45.720799 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:45.728551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:45.742167 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:11:45.753736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:11:45.795521 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:11:45.795608 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:11:45.795636 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:11:45.862503 kernel: raid6: neonx8 gen() 6628 MB/s
Sep 12 17:11:45.879492 kernel: raid6: neonx4 gen() 6495 MB/s
Sep 12 17:11:45.896491 kernel: raid6: neonx2 gen() 5425 MB/s
Sep 12 17:11:45.913490 kernel: raid6: neonx1 gen() 3934 MB/s
Sep 12 17:11:45.930491 kernel: raid6: int64x8 gen() 3783 MB/s
Sep 12 17:11:45.947491 kernel: raid6: int64x4 gen() 3707 MB/s
Sep 12 17:11:45.964491 kernel: raid6: int64x2 gen() 3571 MB/s
Sep 12 17:11:45.982475 kernel: raid6: int64x1 gen() 2761 MB/s
Sep 12 17:11:45.982509 kernel: raid6: using algorithm neonx8 gen() 6628 MB/s
Sep 12 17:11:46.000420 kernel: raid6: .... xor() 4910 MB/s, rmw enabled
Sep 12 17:11:46.000492 kernel: raid6: using neon recovery algorithm
Sep 12 17:11:46.008502 kernel: xor: measuring software checksum speed
Sep 12 17:11:46.008561 kernel: 8regs : 10970 MB/sec
Sep 12 17:11:46.011707 kernel: 32regs : 10713 MB/sec
Sep 12 17:11:46.011739 kernel: arm64_neon : 9565 MB/sec
Sep 12 17:11:46.011764 kernel: xor: using function: 8regs (10970 MB/sec)
Sep 12 17:11:46.097514 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:11:46.116767 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:11:46.135809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:46.169758 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Sep 12 17:11:46.177877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:46.200725 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:11:46.230307 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Sep 12 17:11:46.287101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:11:46.299800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:11:46.420103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:46.434866 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:11:46.484278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:11:46.489077 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:11:46.502187 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:46.504885 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:11:46.522722 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:11:46.564641 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:11:46.612059 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:11:46.612124 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 17:11:46.620130 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:11:46.620440 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:11:46.634607 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:08:96:2b:ef:79
Sep 12 17:11:46.639090 (udev-worker)[537]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:11:46.652559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:11:46.654650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:46.666305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:46.669120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:11:46.670307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:46.682172 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:46.690125 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:11:46.690164 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:11:46.696117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:46.702608 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:11:46.712588 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:11:46.712663 kernel: GPT:9289727 != 16777215
Sep 12 17:11:46.712689 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:11:46.712714 kernel: GPT:9289727 != 16777215
Sep 12 17:11:46.714368 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:11:46.714414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:46.734577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:46.746717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:46.795558 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:46.836362 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (540)
Sep 12 17:11:46.851579 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Sep 12 17:11:46.890104 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:11:46.939024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:11:46.971423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:11:46.978089 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:11:46.994268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:11:47.011751 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:11:47.025404 disk-uuid[662]: Primary Header is updated.
Sep 12 17:11:47.025404 disk-uuid[662]: Secondary Entries is updated.
Sep 12 17:11:47.025404 disk-uuid[662]: Secondary Header is updated.
Sep 12 17:11:47.036535 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:47.044535 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:47.054491 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:48.054539 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:48.058518 disk-uuid[664]: The operation has completed successfully.
Sep 12 17:11:48.230372 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:11:48.231601 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:11:48.295728 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:11:48.306089 sh[1005]: Success
Sep 12 17:11:48.330603 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:11:48.431718 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:11:48.448686 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:11:48.458543 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:11:48.490409 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:11:48.490488 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:48.490518 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:11:48.492438 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:11:48.493882 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:11:48.583504 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:11:48.618419 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:11:48.622974 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:11:48.636712 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:11:48.646878 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:11:48.668554 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:48.668633 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:48.668665 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:48.685539 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:48.708235 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:11:48.711537 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:48.721120 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:11:48.731801 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:11:48.838921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:11:48.857842 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:11:48.915782 systemd-networkd[1197]: lo: Link UP
Sep 12 17:11:48.915803 systemd-networkd[1197]: lo: Gained carrier
Sep 12 17:11:48.921002 systemd-networkd[1197]: Enumeration completed
Sep 12 17:11:48.922383 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:48.922391 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:11:48.928004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:11:48.930669 systemd[1]: Reached target network.target - Network.
Sep 12 17:11:48.934105 systemd-networkd[1197]: eth0: Link UP
Sep 12 17:11:48.934113 systemd-networkd[1197]: eth0: Gained carrier
Sep 12 17:11:48.934130 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:48.950544 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.30.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:11:49.109255 ignition[1112]: Ignition 2.19.0
Sep 12 17:11:49.109283 ignition[1112]: Stage: fetch-offline
Sep 12 17:11:49.113421 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:49.113480 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:49.117983 ignition[1112]: Ignition finished successfully
Sep 12 17:11:49.120285 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:11:49.130805 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:11:49.167884 ignition[1207]: Ignition 2.19.0
Sep 12 17:11:49.167913 ignition[1207]: Stage: fetch
Sep 12 17:11:49.168856 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:49.168881 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:49.169032 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:49.192618 ignition[1207]: PUT result: OK
Sep 12 17:11:49.195938 ignition[1207]: parsed url from cmdline: ""
Sep 12 17:11:49.195954 ignition[1207]: no config URL provided
Sep 12 17:11:49.195970 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:11:49.195996 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:11:49.196033 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:49.200227 ignition[1207]: PUT result: OK
Sep 12 17:11:49.200301 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:11:49.208435 ignition[1207]: GET result: OK
Sep 12 17:11:49.209068 ignition[1207]: parsing config with SHA512: 83913894fdbf87e02513917180747a6d54e2c3187364d62296990f95d6df28c9cafd9591c2b024fd0220dcfdab49ad30ab1857529e30a1a9977263a62d39a371
Sep 12 17:11:49.222854 unknown[1207]: fetched base config from "system"
Sep 12 17:11:49.223104 unknown[1207]: fetched base config from "system"
Sep 12 17:11:49.224219 ignition[1207]: fetch: fetch complete
Sep 12 17:11:49.223119 unknown[1207]: fetched user config from "aws"
Sep 12 17:11:49.224231 ignition[1207]: fetch: fetch passed
Sep 12 17:11:49.224315 ignition[1207]: Ignition finished successfully
Sep 12 17:11:49.237580 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:11:49.250876 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:11:49.275404 ignition[1213]: Ignition 2.19.0
Sep 12 17:11:49.275425 ignition[1213]: Stage: kargs
Sep 12 17:11:49.277749 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:49.277775 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:49.277946 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:49.281635 ignition[1213]: PUT result: OK
Sep 12 17:11:49.290531 ignition[1213]: kargs: kargs passed
Sep 12 17:11:49.290631 ignition[1213]: Ignition finished successfully
Sep 12 17:11:49.295129 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:11:49.305961 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:11:49.333440 ignition[1219]: Ignition 2.19.0
Sep 12 17:11:49.333483 ignition[1219]: Stage: disks
Sep 12 17:11:49.334127 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:49.334151 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:49.334303 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:49.344364 ignition[1219]: PUT result: OK
Sep 12 17:11:49.348598 ignition[1219]: disks: disks passed
Sep 12 17:11:49.348751 ignition[1219]: Ignition finished successfully
Sep 12 17:11:49.350998 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:11:49.358391 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:11:49.361005 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:11:49.363762 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:11:49.365941 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:11:49.368091 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:11:49.382199 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:11:49.425370 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:11:49.431519 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:11:49.445768 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:11:49.524519 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:11:49.526061 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:11:49.530142 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:11:49.547651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:11:49.556841 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:11:49.561404 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:11:49.562018 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:11:49.562067 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:11:49.587489 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247)
Sep 12 17:11:49.592418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:11:49.598829 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:49.599149 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:49.599293 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:49.610810 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:11:49.620862 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:49.622476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:11:50.087757 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:11:50.110121 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:11:50.119411 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:11:50.128067 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:11:50.460287 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:11:50.471734 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:11:50.478693 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:11:50.495773 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:11:50.500326 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:50.543803 ignition[1361]: INFO : Ignition 2.19.0
Sep 12 17:11:50.543803 ignition[1361]: INFO : Stage: mount
Sep 12 17:11:50.548767 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:11:50.554837 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:50.554837 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:50.554837 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:50.562672 ignition[1361]: INFO : PUT result: OK
Sep 12 17:11:50.566940 ignition[1361]: INFO : mount: mount passed
Sep 12 17:11:50.568697 ignition[1361]: INFO : Ignition finished successfully
Sep 12 17:11:50.576334 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:11:50.585706 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:11:50.590613 systemd-networkd[1197]: eth0: Gained IPv6LL
Sep 12 17:11:50.610867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:11:50.635482 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1373)
Sep 12 17:11:50.636538 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:50.639480 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:50.639516 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:50.645492 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:50.649934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:11:50.692196 ignition[1390]: INFO : Ignition 2.19.0
Sep 12 17:11:50.692196 ignition[1390]: INFO : Stage: files
Sep 12 17:11:50.697486 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:50.697486 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:50.697486 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:50.697486 ignition[1390]: INFO : PUT result: OK
Sep 12 17:11:50.708831 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:11:50.712675 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:11:50.712675 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:11:50.762727 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:11:50.765938 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:11:50.765938 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:11:50.765494 unknown[1390]: wrote ssh authorized keys file for user: core
Sep 12 17:11:50.774169 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:11:50.778401 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:11:50.848580 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:11:51.054552 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:11:51.059639 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:11:51.513289 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:11:51.910181 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:11:51.910181 ignition[1390]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:11:51.918053 ignition[1390]: INFO : files: files passed
Sep 12 17:11:51.918053 ignition[1390]: INFO : Ignition finished successfully
Sep 12 17:11:51.946747 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:11:51.956885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:11:51.966772 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:11:51.978030 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:11:51.980696 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:11:52.001921 initrd-setup-root-after-ignition[1419]: grep:
Sep 12 17:11:52.001921 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:52.007605 initrd-setup-root-after-ignition[1419]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:52.007605 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:52.016296 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:11:52.022080 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:11:52.032757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:11:52.089179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:11:52.089631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:11:52.091968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:11:52.092378 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:11:52.093170 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:11:52.096589 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:11:52.143552 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:11:52.154955 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:11:52.181243 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:52.184778 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:52.190038 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:11:52.194154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:11:52.194475 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:11:52.203403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:11:52.205693 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:11:52.207755 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:11:52.210898 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:11:52.221821 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:11:52.225158 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:11:52.231336 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:11:52.234156 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:11:52.236915 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:11:52.245562 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:11:52.247502 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:11:52.247730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:11:52.256861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:52.261649 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:52.267204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:11:52.271516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:52.278581 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:11:52.280806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:11:52.285829 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:11:52.286269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:11:52.294838 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:11:52.295238 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:11:52.308814 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:11:52.310899 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:11:52.311176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:52.337732 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:11:52.343213 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:11:52.343579 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:52.356925 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:11:52.357387 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:11:52.372097 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:11:52.377232 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:11:52.382048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:11:52.388436 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:11:52.388657 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:11:52.407849 ignition[1443]: INFO : Ignition 2.19.0
Sep 12 17:11:52.407849 ignition[1443]: INFO : Stage: umount
Sep 12 17:11:52.411451 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:52.411451 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:52.411451 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:52.419230 ignition[1443]: INFO : PUT result: OK
Sep 12 17:11:52.424213 ignition[1443]: INFO : umount: umount passed
Sep 12 17:11:52.426101 ignition[1443]: INFO : Ignition finished successfully
Sep 12 17:11:52.432525 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:11:52.432734 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:11:52.435734 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:11:52.435818 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:11:52.438393 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:11:52.438594 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:11:52.452903 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:11:52.452993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:11:52.455518 systemd[1]: Stopped target network.target - Network.
Sep 12 17:11:52.457364 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:11:52.457449 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:11:52.459978 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:11:52.461881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:11:52.466664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:52.469353 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:11:52.471315 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:11:52.473428 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:11:52.473526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:11:52.476307 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:11:52.476378 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:11:52.480052 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:11:52.480282 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:11:52.486642 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:11:52.486725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:11:52.493579 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:11:52.493661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:11:52.496846 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:11:52.506919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:52.513376 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Sep 12 17:11:52.527623 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:11:52.527935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:52.539428 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:11:52.541121 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:11:52.557762 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:11:52.557895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:52.568598 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:11:52.572725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:11:52.572855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:11:52.576448 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:11:52.576574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:52.589885 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:11:52.589999 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:52.592335 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:11:52.592416 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:52.596573 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:52.625305 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:11:52.626034 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:52.640982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:11:52.641162 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:52.648253 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:11:52.648340 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:52.657641 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:11:52.657757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:11:52.670106 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:11:52.670223 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:11:52.676554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:11:52.676655 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:52.688779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:11:52.688953 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:11:52.689064 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:52.698951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:11:52.699059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:52.703283 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:11:52.703489 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:11:52.725695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:11:52.726119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:11:52.734966 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:11:52.744767 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:11:52.763662 systemd[1]: Switching root.
Sep 12 17:11:52.828797 systemd-journald[250]: Journal stopped
Sep 12 17:11:55.536816 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:11:55.536939 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:11:55.536984 kernel: SELinux: policy capability open_perms=1
Sep 12 17:11:55.537018 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:11:55.537058 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:11:55.537088 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:11:55.537128 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:11:55.537160 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:11:55.537197 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:11:55.537228 kernel: audit: type=1403 audit(1757697113.422:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:11:55.537267 systemd[1]: Successfully loaded SELinux policy in 83.567ms.
Sep 12 17:11:55.537305 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.270ms.
Sep 12 17:11:55.537348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:11:55.537382 systemd[1]: Detected virtualization amazon.
Sep 12 17:11:55.537414 systemd[1]: Detected architecture arm64.
Sep 12 17:11:55.537444 systemd[1]: Detected first boot.
Sep 12 17:11:55.537499 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:11:55.537537 zram_generator::config[1486]: No configuration found.
Sep 12 17:11:55.537573 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:11:55.537606 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:11:55.537639 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:11:55.537672 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:11:55.537706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:11:55.537746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:11:55.537777 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:11:55.537814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:11:55.537862 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:11:55.537899 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:11:55.537934 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:11:55.537964 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:11:55.537996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:55.538026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:55.538056 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:11:55.538086 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:11:55.542537 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:11:55.542583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:11:55.542617 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:11:55.542653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:55.542685 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:11:55.542716 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:11:55.542749 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:11:55.542789 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:11:55.542821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:55.542853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:11:55.542887 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:11:55.542920 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:11:55.542951 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:11:55.542983 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:11:55.543016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:55.543046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:55.543077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:55.543113 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:11:55.543144 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:11:55.543177 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:11:55.543208 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:11:55.543241 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:11:55.543274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:11:55.543303 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:11:55.543337 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:11:55.543371 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:11:55.543407 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:11:55.543437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:55.545550 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:11:55.545597 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:11:55.545632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:11:55.545663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:11:55.545694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:11:55.545724 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:11:55.545761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:11:55.545792 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:11:55.545822 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:11:55.545874 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:11:55.545906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:11:55.545936 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:11:55.545966 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:11:55.545998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:11:55.546028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:11:55.546063 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:11:55.546094 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:11:55.546126 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:11:55.546156 systemd[1]: Stopped verity-setup.service.
Sep 12 17:11:55.546187 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:11:55.546217 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:11:55.546247 kernel: loop: module loaded
Sep 12 17:11:55.546281 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:11:55.546312 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:11:55.546341 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:11:55.546370 kernel: fuse: init (API version 7.39)
Sep 12 17:11:55.546398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:11:55.546428 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:55.547591 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:11:55.550503 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:11:55.550561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:11:55.550593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:11:55.550623 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:11:55.550653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:11:55.550683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:11:55.550764 systemd-journald[1571]: Collecting audit messages is disabled.
Sep 12 17:11:55.550822 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:11:55.550854 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:11:55.550884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:11:55.550914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:11:55.550946 systemd-journald[1571]: Journal started
Sep 12 17:11:55.551000 systemd-journald[1571]: Runtime Journal (/run/log/journal/ec299692df6e82a29e21207561442b19) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:11:54.853123 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:11:54.967837 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:11:54.968654 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:11:55.573652 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:55.573727 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:11:55.563478 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:11:55.566741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:11:55.599552 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:11:55.608558 kernel: ACPI: bus type drm_connector registered
Sep 12 17:11:55.611811 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:11:55.618665 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:11:55.621162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:11:55.621218 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:11:55.625310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:11:55.638839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:11:55.647436 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:11:55.649932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:55.675705 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:11:55.681899 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:11:55.684600 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:11:55.686837 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:11:55.689510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:11:55.694770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:11:55.702788 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:11:55.712858 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:11:55.720795 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:11:55.721201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:11:55.725019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:11:55.727852 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:11:55.732536 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:11:55.793404 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:11:55.797501 kernel: loop0: detected capacity change from 0 to 211168
Sep 12 17:11:55.800826 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:11:55.830018 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:11:55.844053 systemd-journald[1571]: Time spent on flushing to /var/log/journal/ec299692df6e82a29e21207561442b19 is 129.393ms for 911 entries.
Sep 12 17:11:55.844053 systemd-journald[1571]: System Journal (/var/log/journal/ec299692df6e82a29e21207561442b19) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:11:55.992274 systemd-journald[1571]: Received client request to flush runtime journal.
Sep 12 17:11:55.992363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:11:55.894442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:11:55.898572 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:11:55.920651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:55.938759 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:11:55.952820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:11:55.999252 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:11:56.005571 kernel: loop1: detected capacity change from 0 to 114432
Sep 12 17:11:56.013622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:56.026860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:11:56.075079 systemd-tmpfiles[1631]: ACLs are not supported, ignoring.
Sep 12 17:11:56.075654 systemd-tmpfiles[1631]: ACLs are not supported, ignoring.
Sep 12 17:11:56.080799 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 17:11:56.086117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:56.138754 kernel: loop2: detected capacity change from 0 to 114328
Sep 12 17:11:56.249677 kernel: loop3: detected capacity change from 0 to 52536
Sep 12 17:11:56.351490 kernel: loop4: detected capacity change from 0 to 211168
Sep 12 17:11:56.384539 kernel: loop5: detected capacity change from 0 to 114432
Sep 12 17:11:56.400485 kernel: loop6: detected capacity change from 0 to 114328
Sep 12 17:11:56.417509 kernel: loop7: detected capacity change from 0 to 52536
Sep 12 17:11:56.433419 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:11:56.434474 (sd-merge)[1642]: Merged extensions into '/usr'.
Sep 12 17:11:56.443748 systemd[1]: Reloading requested from client PID 1614 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:11:56.443783 systemd[1]: Reloading...
Sep 12 17:11:56.650503 zram_generator::config[1668]: No configuration found.
Sep 12 17:11:56.944225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:11:57.068437 systemd[1]: Reloading finished in 623 ms.
Sep 12 17:11:57.122521 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:11:57.127561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:11:57.145132 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:11:57.162316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:11:57.171071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:57.187760 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:11:57.187788 systemd[1]: Reloading...
Sep 12 17:11:57.243529 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:11:57.244192 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:11:57.250170 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:11:57.251048 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Sep 12 17:11:57.253688 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Sep 12 17:11:57.254604 ldconfig[1609]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:11:57.265499 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:11:57.265533 systemd-tmpfiles[1721]: Skipping /boot
Sep 12 17:11:57.301926 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:11:57.304541 systemd-tmpfiles[1721]: Skipping /boot
Sep 12 17:11:57.324996 systemd-udevd[1722]: Using default interface naming scheme 'v255'.
Sep 12 17:11:57.382504 zram_generator::config[1750]: No configuration found.
Sep 12 17:11:57.563170 (udev-worker)[1763]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:11:57.776236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:11:57.852494 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1787)
Sep 12 17:11:57.956273 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:11:57.957600 systemd[1]: Reloading finished in 769 ms.
Sep 12 17:11:57.992731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:57.999052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:11:58.005172 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:58.111730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:11:58.117143 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:11:58.150740 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:11:58.164848 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:11:58.173784 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:11:58.178702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:58.186908 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:11:58.197082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:11:58.203389 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:11:58.210779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:11:58.220943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:11:58.223674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:58.228809 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:11:58.237503 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:11:58.246802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:11:58.257915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:58.260253 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:11:58.276809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:11:58.285783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:58.303572 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:11:58.313084 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:11:58.313488 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:11:58.324563 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:11:58.336780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:11:58.338905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:11:58.346402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:11:58.347933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:11:58.357841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:11:58.384219 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:11:58.389210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:11:58.391547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:11:58.396522 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:11:58.406393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:58.414156 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:11:58.416950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:11:58.418214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:11:58.436095 augenrules[1955]: No rules
Sep 12 17:11:58.443811 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:11:58.476904 lvm[1954]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:11:58.484593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:11:58.496229 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:11:58.541963 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:11:58.555761 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:11:58.560016 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:11:58.562428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:11:58.581586 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:11:58.643492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:58.701641 systemd-networkd[1934]: lo: Link UP
Sep 12 17:11:58.701663 systemd-networkd[1934]: lo: Gained carrier
Sep 12 17:11:58.704390 systemd-networkd[1934]: Enumeration completed
Sep 12 17:11:58.705619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:11:58.710338 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:58.710373 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:11:58.714711 systemd-resolved[1935]: Positive Trust Anchors:
Sep 12 17:11:58.715027 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:11:58.715093 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:11:58.716895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:11:58.722482 systemd-networkd[1934]: eth0: Link UP
Sep 12 17:11:58.722857 systemd-networkd[1934]: eth0: Gained carrier
Sep 12 17:11:58.722904 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:58.732766 systemd-resolved[1935]: Defaulting to hostname 'linux'.
Sep 12 17:11:58.736156 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:58.738740 systemd[1]: Reached target network.target - Network.
Sep 12 17:11:58.740741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:58.743488 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:11:58.745971 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:11:58.748695 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:11:58.751750 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:11:58.754454 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:11:58.757504 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:11:58.760221 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:11:58.760259 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.30.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:11:58.760280 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:11:58.762338 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:11:58.765797 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:11:58.774387 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:11:58.782891 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:11:58.786178 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:11:58.788660 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:11:58.790800 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:11:58.792841 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
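An editor's aside, not part of the log: the DHCPv4 lease recorded above (172.31.30.188/20 via gateway 172.31.16.1) can be sanity-checked with Python's standard `ipaddress` module; the /20 prefix places both the host and the gateway in the same 4096-address subnet, which is the usual layout of an EC2 VPC subnet.

```python
import ipaddress

# Lease as reported by systemd-networkd: address/prefix plus gateway.
iface = ipaddress.ip_interface("172.31.30.188/20")
gateway = ipaddress.ip_address("172.31.16.1")

# A /20 mask on 172.31.30.188 yields the 172.31.16.0/20 network.
assert iface.network == ipaddress.ip_network("172.31.16.0/20")
# The gateway (also the DHCP server here) sits inside that same subnet.
assert gateway in iface.network
print(iface.network, iface.network.num_addresses)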
Sep 12 17:11:58.792893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:11:58.805844 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:11:58.814228 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:11:58.826998 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:11:58.831884 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:11:58.842766 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:11:58.844994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:11:58.850923 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:11:58.862267 jq[1984]: false
Sep 12 17:11:58.863937 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:11:58.871835 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:11:58.880887 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:11:58.890871 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:11:58.899249 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:11:58.912427 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:11:58.920418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:11:58.924398 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:11:58.928803 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:11:58.938710 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:11:58.948228 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:11:58.949700 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:11:59.033676 dbus-daemon[1983]: [system] SELinux support is enabled
Sep 12 17:11:59.049834 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:11:59.039170 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:11:59.066392 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:11:59.060967 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: ----------------------------------------------------
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: corporation. Support and training for ntp-4 are
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: available at https://www.nwtime.org/support
Sep 12 17:11:59.068082 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: ----------------------------------------------------
Sep 12 17:11:59.066776 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:11:59.061012 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:11:59.061034 ntpd[1987]: ----------------------------------------------------
Sep 12 17:11:59.061052 ntpd[1987]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:11:59.061072 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:11:59.094031 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: proto: precision = 0.108 usec (-23)
Sep 12 17:11:59.094031 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: basedate set to 2025-08-31
Sep 12 17:11:59.094031 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:11:59.086530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:11:59.061091 ntpd[1987]: corporation. Support and training for ntp-4 are
Sep 12 17:11:59.086651 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:11:59.061110 ntpd[1987]: available at https://www.nwtime.org/support
Sep 12 17:11:59.061129 ntpd[1987]: ----------------------------------------------------
Sep 12 17:11:59.086553 ntpd[1987]: proto: precision = 0.108 usec (-23)
Sep 12 17:11:59.090238 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:11:59.090790 ntpd[1987]: basedate set to 2025-08-31
Sep 12 17:11:59.090824 ntpd[1987]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:11:59.095434 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:11:59.101698 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:11:59.101698 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:11:59.100350 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:11:59.095503 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:11:59.100434 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:11:59.102715 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:11:59.107557 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listen normally on 3 eth0 172.31.30.188:123
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listen normally on 4 lo [::1]:123
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: bind(21) AF_INET6 fe80::408:96ff:fe2b:ef79%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: unable to create socket on eth0 (5) for fe80::408:96ff:fe2b:ef79%2#123
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: failed to init interface for address fe80::408:96ff:fe2b:ef79%2
Sep 12 17:11:59.108245 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: Listening on routing socket on fd #21 for interface updates
Sep 12 17:11:59.102816 ntpd[1987]: Listen normally on 3 eth0 172.31.30.188:123
Sep 12 17:11:59.108580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:11:59.102884 ntpd[1987]: Listen normally on 4 lo [::1]:123
Sep 12 17:11:59.102971 ntpd[1987]: bind(21) AF_INET6 fe80::408:96ff:fe2b:ef79%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:11:59.103011 ntpd[1987]: unable to create socket on eth0 (5) for fe80::408:96ff:fe2b:ef79%2#123
Sep 12 17:11:59.103045 ntpd[1987]: failed to init interface for address fe80::408:96ff:fe2b:ef79%2
Sep 12 17:11:59.103106 ntpd[1987]: Listening on routing socket on fd #21 for interface updates
Sep 12 17:11:59.127496 extend-filesystems[1985]: Found loop4
Sep 12 17:11:59.136759 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
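An editor's aside, not part of the log: the address ntpd fails to bind above, `fe80::408:96ff:fe2b:ef79%2`, is an IPv6 link-local address with zone index 2 (eth0); the `EADDRNOTAVAIL` ("Cannot assign requested address") typically means the kernel had not yet finished Duplicate Address Detection on it when ntpd tried to bind. A small sketch (Python 3.9+, where `ipaddress` understands the `%zone` suffix) confirming what kind of address this is:

```python
import ipaddress

# The address from ntpd's bind() failure, including its %2 zone index.
addr = ipaddress.IPv6Address("fe80::408:96ff:fe2b:ef79%2")

assert addr.is_link_local        # fe80::/10 — valid only on one interface
assert addr.scope_id == "2"      # the interface index carried after '%'
print(addr, addr.is_link_local)
```

Because the address is link-local, ntpd retries the bind later (visible further down in the log) once the interface address leaves the tentative state.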
Sep 12 17:11:59.147489 jq[1996]: true
Sep 12 17:11:59.148004 extend-filesystems[1985]: Found loop5
Sep 12 17:11:59.151662 extend-filesystems[1985]: Found loop6
Sep 12 17:11:59.151662 extend-filesystems[1985]: Found loop7
Sep 12 17:11:59.154861 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:59.154861 ntpd[1987]: 12 Sep 17:11:59 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:59.151351 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:59.151401 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:59.155552 extend-filesystems[1985]: Found nvme0n1
Sep 12 17:11:59.158429 extend-filesystems[1985]: Found nvme0n1p1
Sep 12 17:11:59.164435 update_engine[1995]: I20250912 17:11:59.163921 1995 main.cc:92] Flatcar Update Engine starting
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p2
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p3
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found usr
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p4
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p6
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p7
Sep 12 17:11:59.171781 extend-filesystems[1985]: Found nvme0n1p9
Sep 12 17:11:59.171781 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9
Sep 12 17:11:59.199210 coreos-metadata[1982]: Sep 12 17:11:59.195 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:11:59.199773 tar[2009]: linux-arm64/LICENSE
Sep 12 17:11:59.199773 tar[2009]: linux-arm64/helm
Sep 12 17:11:59.196167 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:11:59.200030 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:11:59.204765 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:11:59.210309 update_engine[1995]: I20250912 17:11:59.204749 1995 update_check_scheduler.cc:74] Next update check in 11m5s
Sep 12 17:11:59.222514 coreos-metadata[1982]: Sep 12 17:11:59.214 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 12 17:11:59.224042 coreos-metadata[1982]: Sep 12 17:11:59.223 INFO Fetch successful
Sep 12 17:11:59.224042 coreos-metadata[1982]: Sep 12 17:11:59.223 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 12 17:11:59.230959 coreos-metadata[1982]: Sep 12 17:11:59.230 INFO Fetch successful
Sep 12 17:11:59.230959 coreos-metadata[1982]: Sep 12 17:11:59.230 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 17:11:59.237506 coreos-metadata[1982]: Sep 12 17:11:59.235 INFO Fetch successful
Sep 12 17:11:59.237506 coreos-metadata[1982]: Sep 12 17:11:59.235 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 12 17:11:59.243297 coreos-metadata[1982]: Sep 12 17:11:59.240 INFO Fetch successful
Sep 12 17:11:59.243297 coreos-metadata[1982]: Sep 12 17:11:59.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 12 17:11:59.245264 coreos-metadata[1982]: Sep 12 17:11:59.244 INFO Fetch failed with 404: resource not found
Sep 12 17:11:59.245264 coreos-metadata[1982]: Sep 12 17:11:59.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 17:11:59.250010 coreos-metadata[1982]: Sep 12 17:11:59.249 INFO Fetch successful
Sep 12 17:11:59.250010 coreos-metadata[1982]: Sep 12 17:11:59.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 12 17:11:59.262497 coreos-metadata[1982]: Sep 12 17:11:59.257 INFO Fetch successful
Sep 12 17:11:59.262497 coreos-metadata[1982]: Sep 12 17:11:59.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 12 17:11:59.265380 coreos-metadata[1982]: Sep 12 17:11:59.265 INFO Fetch successful
Sep 12 17:11:59.265380 coreos-metadata[1982]: Sep 12 17:11:59.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 12 17:11:59.267446 coreos-metadata[1982]: Sep 12 17:11:59.267 INFO Fetch successful
Sep 12 17:11:59.267446 coreos-metadata[1982]: Sep 12 17:11:59.267 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 12 17:11:59.270635 jq[2025]: true
Sep 12 17:11:59.273964 coreos-metadata[1982]: Sep 12 17:11:59.271 INFO Fetch successful
Sep 12 17:11:59.310283 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9
Sep 12 17:11:59.331879 extend-filesystems[2039]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:11:59.346346 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 12 17:11:59.378501 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 17:11:59.427015 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:11:59.445349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:11:59.451561 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:11:59.513488 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 12 17:11:59.527709 extend-filesystems[2039]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 12 17:11:59.527709 extend-filesystems[2039]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:11:59.527709 extend-filesystems[2039]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 17:11:59.549552 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9
Sep 12 17:11:59.534484 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:11:59.535588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
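An editor's aside, not part of the log: the online ext4 resize recorded above grew the root filesystem from 553472 to 1489915 blocks, and extend-filesystems confirms the block size is 4 KiB ("(4k) blocks"). The before/after byte sizes follow directly:

```python
# Block counts from the EXT4-fs kernel lines; 4 KiB block size per resize2fs.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915

old_bytes = old_blocks * BLOCK   # size before the resize
new_bytes = new_blocks * BLOCK   # size after the resize

# Roughly 2.1 GiB grown to roughly 5.7 GiB on the root partition.
print(old_bytes, new_bytes)
print(round(old_bytes / 2**30, 2), round(new_bytes / 2**30, 2))
```

This is the usual first-boot step on Flatcar: the image ships with a small root filesystem that is grown in place to fill the ninth partition of the provisioned disk.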
Sep 12 17:11:59.554103 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 17:11:59.554155 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 12 17:11:59.554531 systemd-logind[1992]: New seat seat0.
Sep 12 17:11:59.556055 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:11:59.582001 bash[2061]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:11:59.582110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:11:59.607183 systemd[1]: Starting sshkeys.service...
Sep 12 17:11:59.634047 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:11:59.645868 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:11:59.690501 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1789)
Sep 12 17:11:59.773497 coreos-metadata[2072]: Sep 12 17:11:59.771 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:11:59.775054 coreos-metadata[2072]: Sep 12 17:11:59.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 12 17:11:59.777703 coreos-metadata[2072]: Sep 12 17:11:59.775 INFO Fetch successful
Sep 12 17:11:59.777703 coreos-metadata[2072]: Sep 12 17:11:59.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 12 17:11:59.777703 coreos-metadata[2072]: Sep 12 17:11:59.777 INFO Fetch successful
Sep 12 17:11:59.780752 unknown[2072]: wrote ssh authorized keys file for user: core
Sep 12 17:11:59.846657 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 12 17:11:59.847168 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 12 17:11:59.855662 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 12 17:11:59.865495 update-ssh-keys[2091]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:11:59.871135 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 12 17:11:59.874298 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:11:59.897305 systemd[1]: Finished sshkeys.service.
Sep 12 17:12:00.014938 polkitd[2100]: Started polkitd version 121
Sep 12 17:12:00.023233 locksmithd[2027]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:12:00.053695 polkitd[2100]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 17:12:00.053843 polkitd[2100]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 17:12:00.060491 containerd[2020]: time="2025-09-12T17:12:00.058926033Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:12:00.060355 polkitd[2100]: Finished loading, compiling and executing 2 rules
Sep 12 17:12:00.062525 ntpd[1987]: bind(24) AF_INET6 fe80::408:96ff:fe2b:ef79%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:12:00.062593 ntpd[1987]: unable to create socket on eth0 (6) for fe80::408:96ff:fe2b:ef79%2#123
Sep 12 17:12:00.062995 ntpd[1987]: 12 Sep 17:12:00 ntpd[1987]: bind(24) AF_INET6 fe80::408:96ff:fe2b:ef79%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:12:00.062995 ntpd[1987]: 12 Sep 17:12:00 ntpd[1987]: unable to create socket on eth0 (6) for fe80::408:96ff:fe2b:ef79%2#123
Sep 12 17:12:00.062995 ntpd[1987]: 12 Sep 17:12:00 ntpd[1987]: failed to init interface for address fe80::408:96ff:fe2b:ef79%2
Sep 12 17:12:00.062623 ntpd[1987]: failed to init interface for address fe80::408:96ff:fe2b:ef79%2
Sep 12 17:12:00.065867 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 17:12:00.067153 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 17:12:00.066358 polkitd[2100]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 17:12:00.116713 systemd-hostnamed[2022]: Hostname set to (transient)
Sep 12 17:12:00.118418 systemd-resolved[1935]: System hostname changed to 'ip-172-31-30-188'.
Sep 12 17:12:00.224830 containerd[2020]: time="2025-09-12T17:12:00.223227298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232081246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232148314Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232184350Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232515982Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232553086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232681258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.232712410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.234829450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.234867634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.234900190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235489 containerd[2020]: time="2025-09-12T17:12:00.234925258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.235985 containerd[2020]: time="2025-09-12T17:12:00.235107922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.236790 containerd[2020]: time="2025-09-12T17:12:00.236744866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:12:00.237738 containerd[2020]: time="2025-09-12T17:12:00.237695074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:12:00.239895 containerd[2020]: time="2025-09-12T17:12:00.239508442Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:12:00.239895 containerd[2020]: time="2025-09-12T17:12:00.239740522Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:12:00.239895 containerd[2020]: time="2025-09-12T17:12:00.239838190Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.248940250Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249034966Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249069358Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249104494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249136366Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249392650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.249853378Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250078318Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250111102Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250141294Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250172362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250203670Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250234402Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.250674 containerd[2020]: time="2025-09-12T17:12:00.250265962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.251297 containerd[2020]: time="2025-09-12T17:12:00.250297126Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.251297 containerd[2020]: time="2025-09-12T17:12:00.250326178Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.251297 containerd[2020]: time="2025-09-12T17:12:00.250355758Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.251297 containerd[2020]: time="2025-09-12T17:12:00.250383346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:12:00.251297 containerd[2020]: time="2025-09-12T17:12:00.250424302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253594366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253678906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253713154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253743670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253816978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253851394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253882402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253913242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253948570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.253977766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.254011594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.254041726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.254089570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.254137234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.254424 containerd[2020]: time="2025-09-12T17:12:00.254166142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.255119 containerd[2020]: time="2025-09-12T17:12:00.254192818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:12:00.256350 containerd[2020]: time="2025-09-12T17:12:00.255218446Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:12:00.256350 containerd[2020]: time="2025-09-12T17:12:00.256030462Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:12:00.256350 containerd[2020]: time="2025-09-12T17:12:00.256065670Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:12:00.256697 containerd[2020]: time="2025-09-12T17:12:00.256290118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:12:00.259654 containerd[2020]: time="2025-09-12T17:12:00.258483922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.259654 containerd[2020]: time="2025-09-12T17:12:00.258554422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:12:00.259654 containerd[2020]: time="2025-09-12T17:12:00.258581902Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:12:00.259654 containerd[2020]: time="2025-09-12T17:12:00.258607402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:12:00.259895 containerd[2020]: time="2025-09-12T17:12:00.259260862Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:12:00.259895 containerd[2020]: time="2025-09-12T17:12:00.259368466Z" level=info msg="Connect containerd service"
Sep 12 17:12:00.259895 containerd[2020]: time="2025-09-12T17:12:00.259429282Z" level=info msg="using legacy CRI server"
Sep 12 17:12:00.259895 containerd[2020]: time="2025-09-12T17:12:00.259448506Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:12:00.268927 containerd[2020]: time="2025-09-12T17:12:00.266656750Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:12:00.268927 containerd[2020]: time="2025-09-12T17:12:00.268171702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.270940846Z" level=info msg="Start subscribing containerd event"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.271050862Z" level=info msg="Start recovering state"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.271210990Z" level=info msg="Start event monitor"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.271238758Z" level=info msg="Start snapshots syncer"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.271262578Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:12:00.272174 containerd[2020]: time="2025-09-12T17:12:00.271292074Z" level=info msg="Start streaming server"
Sep 12 17:12:00.285498 containerd[2020]: time="2025-09-12T17:12:00.276639862Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:12:00.287775 containerd[2020]: time="2025-09-12T17:12:00.285809362Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:12:00.287775 containerd[2020]: time="2025-09-12T17:12:00.285943678Z" level=info msg="containerd successfully booted in 0.230869s"
Sep 12 17:12:00.286063 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:12:00.336906 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:12:00.390724 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:12:00.402894 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:12:00.418940 systemd[1]: Started sshd@0-172.31.30.188:22-147.75.109.163:41438.service - OpenSSH per-connection server daemon (147.75.109.163:41438). Sep 12 17:12:00.443527 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:12:00.443935 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:12:00.453992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:12:00.501558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:12:00.520209 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:12:00.533665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:12:00.536546 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:12:00.574738 systemd-networkd[1934]: eth0: Gained IPv6LL Sep 12 17:12:00.581617 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:12:00.586797 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:12:00.601053 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:12:00.617989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:12:00.633144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:12:00.682022 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 41438 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:00.690109 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:00.720097 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:12:00.738968 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:12:00.744542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 12 17:12:00.757006 systemd-logind[1992]: New session 1 of user core. Sep 12 17:12:00.788627 amazon-ssm-agent[2207]: Initializing new seelog logger Sep 12 17:12:00.790895 amazon-ssm-agent[2207]: New Seelog Logger Creation Complete Sep 12 17:12:00.790895 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.790895 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.790895 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 processing appconfig overrides Sep 12 17:12:00.795863 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO Proxy environment variables: Sep 12 17:12:00.798100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:12:00.801489 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.801489 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.801489 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 processing appconfig overrides Sep 12 17:12:00.801489 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.803189 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.803449 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 processing appconfig overrides Sep 12 17:12:00.813439 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.813439 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:12:00.815678 amazon-ssm-agent[2207]: 2025/09/12 17:12:00 processing appconfig overrides Sep 12 17:12:00.817141 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 17:12:00.841481 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:12:00.897190 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO https_proxy: Sep 12 17:12:00.999555 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO http_proxy: Sep 12 17:12:01.097150 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO no_proxy: Sep 12 17:12:01.155907 systemd[2224]: Queued start job for default target default.target. Sep 12 17:12:01.161572 systemd[2224]: Created slice app.slice - User Application Slice. Sep 12 17:12:01.161635 systemd[2224]: Reached target paths.target - Paths. Sep 12 17:12:01.161667 systemd[2224]: Reached target timers.target - Timers. Sep 12 17:12:01.172727 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:12:01.196446 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:12:01.201548 tar[2009]: linux-arm64/README.md Sep 12 17:12:01.207324 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:12:01.208661 systemd[2224]: Reached target sockets.target - Sockets. Sep 12 17:12:01.208717 systemd[2224]: Reached target basic.target - Basic System. Sep 12 17:12:01.208805 systemd[2224]: Reached target default.target - Main User Target. Sep 12 17:12:01.208869 systemd[2224]: Startup finished in 349ms. Sep 12 17:12:01.219242 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:12:01.225535 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:12:01.250573 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:12:01.295494 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:12:01.395009 systemd[1]: Started sshd@1-172.31.30.188:22-147.75.109.163:55486.service - OpenSSH per-connection server daemon (147.75.109.163:55486). 
Sep 12 17:12:01.403682 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO Agent will take identity from EC2 Sep 12 17:12:01.497000 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:12:01.596809 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:12:01.609748 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 55486 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:01.612426 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:01.621060 systemd-logind[1992]: New session 2 of user core. Sep 12 17:12:01.629756 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:12:01.696563 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:12:01.763765 sshd[2241]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:01.769795 systemd[1]: sshd@1-172.31.30.188:22-147.75.109.163:55486.service: Deactivated successfully. Sep 12 17:12:01.770554 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:12:01.776479 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:12:01.796614 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:12:01.801194 systemd-logind[1992]: Removed session 2. Sep 12 17:12:01.804178 systemd[1]: Started sshd@2-172.31.30.188:22-147.75.109.163:55488.service - OpenSSH per-connection server daemon (147.75.109.163:55488). 
Sep 12 17:12:01.897573 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:12:01.996947 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:12:02.006485 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 55488 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:02.009573 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:02.017240 systemd-logind[1992]: New session 3 of user core. Sep 12 17:12:02.023730 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:12:02.074102 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:12:02.074102 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [Registrar] Starting registrar module Sep 12 17:12:02.074102 amazon-ssm-agent[2207]: 2025-09-12 17:12:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:12:02.074102 amazon-ssm-agent[2207]: 2025-09-12 17:12:02 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:12:02.074650 amazon-ssm-agent[2207]: 2025-09-12 17:12:02 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:12:02.074650 amazon-ssm-agent[2207]: 2025-09-12 17:12:02 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:12:02.074650 amazon-ssm-agent[2207]: 2025-09-12 17:12:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:12:02.097660 amazon-ssm-agent[2207]: 2025-09-12 17:12:02 INFO [CredentialRefresher] Next credential rotation will be in 31.191657526366665 minutes Sep 12 17:12:02.152141 sshd[2248]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:02.157833 systemd[1]: sshd@2-172.31.30.188:22-147.75.109.163:55488.service: Deactivated successfully. 
Sep 12 17:12:02.162690 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:12:02.168212 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:12:02.170372 systemd-logind[1992]: Removed session 3. Sep 12 17:12:03.062063 ntpd[1987]: Listen normally on 7 eth0 [fe80::408:96ff:fe2b:ef79%2]:123 Sep 12 17:12:03.064884 ntpd[1987]: 12 Sep 17:12:03 ntpd[1987]: Listen normally on 7 eth0 [fe80::408:96ff:fe2b:ef79%2]:123 Sep 12 17:12:03.087726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:12:03.096702 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:12:03.102193 systemd[1]: Startup finished in 1.179s (kernel) + 8.576s (initrd) + 9.763s (userspace) = 19.519s. Sep 12 17:12:03.110527 amazon-ssm-agent[2207]: 2025-09-12 17:12:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:12:03.125070 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:12:03.209431 amazon-ssm-agent[2207]: 2025-09-12 17:12:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Sep 12 17:12:03.309827 amazon-ssm-agent[2207]: 2025-09-12 17:12:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:12:04.346421 kubelet[2260]: E0912 17:12:04.346326 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:12:04.359045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:12:04.359373 
systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:12:04.361671 systemd[1]: kubelet.service: Consumed 1.397s CPU time. Sep 12 17:12:06.527381 systemd-resolved[1935]: Clock change detected. Flushing caches. Sep 12 17:12:12.656129 systemd[1]: Started sshd@3-172.31.30.188:22-147.75.109.163:34752.service - OpenSSH per-connection server daemon (147.75.109.163:34752). Sep 12 17:12:12.820828 sshd[2284]: Accepted publickey for core from 147.75.109.163 port 34752 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:12.823393 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:12.832943 systemd-logind[1992]: New session 4 of user core. Sep 12 17:12:12.840895 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:12:12.965560 sshd[2284]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:12.972295 systemd[1]: sshd@3-172.31.30.188:22-147.75.109.163:34752.service: Deactivated successfully. Sep 12 17:12:12.975532 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:12:12.976864 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:12:12.978715 systemd-logind[1992]: Removed session 4. Sep 12 17:12:13.005135 systemd[1]: Started sshd@4-172.31.30.188:22-147.75.109.163:34762.service - OpenSSH per-connection server daemon (147.75.109.163:34762). Sep 12 17:12:13.170469 sshd[2291]: Accepted publickey for core from 147.75.109.163 port 34762 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:13.173078 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:13.180457 systemd-logind[1992]: New session 5 of user core. Sep 12 17:12:13.191841 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:12:13.308402 sshd[2291]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:13.315644 systemd[1]: sshd@4-172.31.30.188:22-147.75.109.163:34762.service: Deactivated successfully. Sep 12 17:12:13.319908 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:12:13.321506 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:12:13.323262 systemd-logind[1992]: Removed session 5. Sep 12 17:12:13.346121 systemd[1]: Started sshd@5-172.31.30.188:22-147.75.109.163:34770.service - OpenSSH per-connection server daemon (147.75.109.163:34770). Sep 12 17:12:13.518311 sshd[2298]: Accepted publickey for core from 147.75.109.163 port 34770 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:13.520902 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:13.529367 systemd-logind[1992]: New session 6 of user core. Sep 12 17:12:13.538903 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:12:13.664424 sshd[2298]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:13.670577 systemd[1]: sshd@5-172.31.30.188:22-147.75.109.163:34770.service: Deactivated successfully. Sep 12 17:12:13.674265 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:12:13.677284 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:12:13.679233 systemd-logind[1992]: Removed session 6. Sep 12 17:12:13.705119 systemd[1]: Started sshd@6-172.31.30.188:22-147.75.109.163:34776.service - OpenSSH per-connection server daemon (147.75.109.163:34776). Sep 12 17:12:13.869059 sshd[2305]: Accepted publickey for core from 147.75.109.163 port 34776 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:13.871668 sshd[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:13.879077 systemd-logind[1992]: New session 7 of user core. 
Sep 12 17:12:13.887859 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:12:14.027544 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:12:14.028274 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:14.043230 sudo[2308]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:14.066722 sshd[2305]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:14.073798 systemd[1]: sshd@6-172.31.30.188:22-147.75.109.163:34776.service: Deactivated successfully. Sep 12 17:12:14.077864 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:12:14.081052 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:12:14.083108 systemd-logind[1992]: Removed session 7. Sep 12 17:12:14.105156 systemd[1]: Started sshd@7-172.31.30.188:22-147.75.109.163:34786.service - OpenSSH per-connection server daemon (147.75.109.163:34786). Sep 12 17:12:14.274857 sshd[2313]: Accepted publickey for core from 147.75.109.163 port 34786 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:14.277475 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:14.285835 systemd-logind[1992]: New session 8 of user core. Sep 12 17:12:14.293867 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:12:14.399058 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:12:14.399730 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:14.405641 sudo[2317]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:14.415712 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:12:14.416336 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:14.442082 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:12:14.445565 auditctl[2320]: No rules Sep 12 17:12:14.447420 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:12:14.448333 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:12:14.464430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:12:14.504152 augenrules[2338]: No rules Sep 12 17:12:14.506973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:12:14.509376 sudo[2316]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:14.532870 sshd[2313]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:14.537511 systemd[1]: sshd@7-172.31.30.188:22-147.75.109.163:34786.service: Deactivated successfully. Sep 12 17:12:14.540744 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:12:14.544293 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:12:14.546285 systemd-logind[1992]: Removed session 8. Sep 12 17:12:14.578334 systemd[1]: Started sshd@8-172.31.30.188:22-147.75.109.163:34788.service - OpenSSH per-connection server daemon (147.75.109.163:34788). 
Sep 12 17:12:14.745504 sshd[2346]: Accepted publickey for core from 147.75.109.163 port 34788 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:14.748072 sshd[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:14.755453 systemd-logind[1992]: New session 9 of user core. Sep 12 17:12:14.766869 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:12:14.869841 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:12:14.870449 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:14.872498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:12:14.882017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:12:15.344968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:12:15.349418 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:12:15.433153 kubelet[2368]: E0912 17:12:15.433095 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:12:15.441223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:12:15.441776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:12:15.576106 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 17:12:15.578870 (dockerd)[2380]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:12:16.074568 dockerd[2380]: time="2025-09-12T17:12:16.074470491Z" level=info msg="Starting up" Sep 12 17:12:16.190497 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3723794084-merged.mount: Deactivated successfully. Sep 12 17:12:16.234452 dockerd[2380]: time="2025-09-12T17:12:16.234118300Z" level=info msg="Loading containers: start." Sep 12 17:12:16.395839 kernel: Initializing XFRM netlink socket Sep 12 17:12:16.428488 (udev-worker)[2403]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:12:16.508831 systemd-networkd[1934]: docker0: Link UP Sep 12 17:12:16.538998 dockerd[2380]: time="2025-09-12T17:12:16.538926761Z" level=info msg="Loading containers: done." Sep 12 17:12:16.570055 dockerd[2380]: time="2025-09-12T17:12:16.569973689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:12:16.570289 dockerd[2380]: time="2025-09-12T17:12:16.570179081Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:12:16.570411 dockerd[2380]: time="2025-09-12T17:12:16.570368561Z" level=info msg="Daemon has completed initialization" Sep 12 17:12:16.645612 dockerd[2380]: time="2025-09-12T17:12:16.644558790Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:12:16.646703 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:12:17.180556 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2233096535-merged.mount: Deactivated successfully. 
Sep 12 17:12:18.022057 containerd[2020]: time="2025-09-12T17:12:18.021988325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:12:18.758962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417495842.mount: Deactivated successfully. Sep 12 17:12:20.317435 containerd[2020]: time="2025-09-12T17:12:20.315324332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:20.317435 containerd[2020]: time="2025-09-12T17:12:20.316882316Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Sep 12 17:12:20.318322 containerd[2020]: time="2025-09-12T17:12:20.318274508Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:20.324713 containerd[2020]: time="2025-09-12T17:12:20.324659432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:20.327052 containerd[2020]: time="2025-09-12T17:12:20.326978732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.304846215s" Sep 12 17:12:20.327172 containerd[2020]: time="2025-09-12T17:12:20.327053684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:12:20.329925 containerd[2020]: 
time="2025-09-12T17:12:20.329879480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:12:21.975629 containerd[2020]: time="2025-09-12T17:12:21.973856964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:21.976690 containerd[2020]: time="2025-09-12T17:12:21.976647312Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Sep 12 17:12:21.978272 containerd[2020]: time="2025-09-12T17:12:21.978222276Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:21.988389 containerd[2020]: time="2025-09-12T17:12:21.988311852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:21.991147 containerd[2020]: time="2025-09-12T17:12:21.991089936Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.660991156s" Sep 12 17:12:21.991336 containerd[2020]: time="2025-09-12T17:12:21.991303584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:12:21.992112 containerd[2020]: time="2025-09-12T17:12:21.992071440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 
17:12:23.263064 containerd[2020]: time="2025-09-12T17:12:23.263006291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:23.265975 containerd[2020]: time="2025-09-12T17:12:23.265930367Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Sep 12 17:12:23.267120 containerd[2020]: time="2025-09-12T17:12:23.267079655Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:23.274792 containerd[2020]: time="2025-09-12T17:12:23.274739279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:23.276379 containerd[2020]: time="2025-09-12T17:12:23.276319223Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.283657947s" Sep 12 17:12:23.276787 containerd[2020]: time="2025-09-12T17:12:23.276381251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:12:23.277034 containerd[2020]: time="2025-09-12T17:12:23.276981311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:12:24.650582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577607853.mount: Deactivated successfully. 
Sep 12 17:12:25.209388 containerd[2020]: time="2025-09-12T17:12:25.208012500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:25.209388 containerd[2020]: time="2025-09-12T17:12:25.209331612Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Sep 12 17:12:25.210587 containerd[2020]: time="2025-09-12T17:12:25.210510864Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:25.214061 containerd[2020]: time="2025-09-12T17:12:25.213980928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:25.215529 containerd[2020]: time="2025-09-12T17:12:25.215476944Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.938435033s" Sep 12 17:12:25.215843 containerd[2020]: time="2025-09-12T17:12:25.215689584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:12:25.216476 containerd[2020]: time="2025-09-12T17:12:25.216430056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:12:25.692097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:12:25.702119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:12:25.758434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877317029.mount: Deactivated successfully.
Sep 12 17:12:26.133108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:26.142282 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:12:26.288626 kubelet[2615]: E0912 17:12:26.286974 2615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:12:26.294774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:12:26.295121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:12:27.214901 containerd[2020]: time="2025-09-12T17:12:27.214841126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.221061 containerd[2020]: time="2025-09-12T17:12:27.220516790Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Sep 12 17:12:27.226654 containerd[2020]: time="2025-09-12T17:12:27.224686214Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.234463 containerd[2020]: time="2025-09-12T17:12:27.234380498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.237046 containerd[2020]: time="2025-09-12T17:12:27.236779910Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.020288378s"
Sep 12 17:12:27.237046 containerd[2020]: time="2025-09-12T17:12:27.236843858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 12 17:12:27.238417 containerd[2020]: time="2025-09-12T17:12:27.238158326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:12:27.728800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696854670.mount: Deactivated successfully.
Sep 12 17:12:27.742652 containerd[2020]: time="2025-09-12T17:12:27.741865889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.743987 containerd[2020]: time="2025-09-12T17:12:27.743923073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 12 17:12:27.746547 containerd[2020]: time="2025-09-12T17:12:27.746466569Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.751819 containerd[2020]: time="2025-09-12T17:12:27.751725689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:27.754699 containerd[2020]: time="2025-09-12T17:12:27.753476705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 515.265603ms"
Sep 12 17:12:27.754699 containerd[2020]: time="2025-09-12T17:12:27.753531677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:12:27.755663 containerd[2020]: time="2025-09-12T17:12:27.755355269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 17:12:28.291848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756956794.mount: Deactivated successfully.
Sep 12 17:12:30.618038 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:12:30.768133 containerd[2020]: time="2025-09-12T17:12:30.768043364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:30.803836 containerd[2020]: time="2025-09-12T17:12:30.803760872Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857"
Sep 12 17:12:30.823263 containerd[2020]: time="2025-09-12T17:12:30.823182896Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:30.832629 containerd[2020]: time="2025-09-12T17:12:30.832215776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:30.835205 containerd[2020]: time="2025-09-12T17:12:30.835141448Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.079718139s"
Sep 12 17:12:30.835330 containerd[2020]: time="2025-09-12T17:12:30.835204580Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 12 17:12:36.432504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:12:36.441988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:36.783862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:36.797505 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:12:36.875800 kubelet[2753]: E0912 17:12:36.875741 2753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:12:36.881161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:12:36.882075 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:12:39.005332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:39.018101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:39.070917 systemd[1]: Reloading requested from client PID 2768 ('systemctl') (unit session-9.scope)...
Sep 12 17:12:39.071116 systemd[1]: Reloading...
Sep 12 17:12:39.323653 zram_generator::config[2814]: No configuration found.
Sep 12 17:12:39.555151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:12:39.728514 systemd[1]: Reloading finished in 656 ms.
Sep 12 17:12:39.822528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:39.829501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:39.835036 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:12:39.835506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:39.850699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:40.160896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:40.163241 (kubelet)[2873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:12:40.242182 kubelet[2873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:40.242182 kubelet[2873]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:12:40.242182 kubelet[2873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:40.242802 kubelet[2873]: I0912 17:12:40.242281 2873 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:12:41.549653 kubelet[2873]: I0912 17:12:41.547497 2873 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:12:41.549653 kubelet[2873]: I0912 17:12:41.548022 2873 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:12:41.549653 kubelet[2873]: I0912 17:12:41.548784 2873 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:12:41.589896 kubelet[2873]: E0912 17:12:41.589839 2873 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 17:12:41.591763 kubelet[2873]: I0912 17:12:41.591726 2873 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:12:41.605562 kubelet[2873]: E0912 17:12:41.605481 2873 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:12:41.605562 kubelet[2873]: I0912 17:12:41.605552 2873 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:12:41.612372 kubelet[2873]: I0912 17:12:41.610664 2873 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:12:41.612372 kubelet[2873]: I0912 17:12:41.611295 2873 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:12:41.612372 kubelet[2873]: I0912 17:12:41.611332 2873 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:12:41.612372 kubelet[2873]: I0912 17:12:41.611736 2873 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:12:41.612769 kubelet[2873]: I0912 17:12:41.611756 2873 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:12:41.612769 kubelet[2873]: I0912 17:12:41.612080 2873 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:41.618628 kubelet[2873]: I0912 17:12:41.618567 2873 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:12:41.618796 kubelet[2873]: I0912 17:12:41.618775 2873 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:12:41.622763 kubelet[2873]: I0912 17:12:41.622728 2873 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:12:41.622912 kubelet[2873]: I0912 17:12:41.622892 2873 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:12:41.627267 kubelet[2873]: E0912 17:12:41.627185 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-188&limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:12:41.627468 kubelet[2873]: I0912 17:12:41.627428 2873 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:12:41.629089 kubelet[2873]: I0912 17:12:41.629039 2873 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:12:41.629353 kubelet[2873]: W0912 17:12:41.629280 2873 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:12:41.639326 kubelet[2873]: I0912 17:12:41.639266 2873 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:12:41.639467 kubelet[2873]: I0912 17:12:41.639374 2873 server.go:1289] "Started kubelet"
Sep 12 17:12:41.643092 kubelet[2873]: E0912 17:12:41.643038 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:12:41.643434 kubelet[2873]: I0912 17:12:41.643382 2873 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:12:41.647992 kubelet[2873]: I0912 17:12:41.647889 2873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:12:41.648498 kubelet[2873]: I0912 17:12:41.648450 2873 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:12:41.649640 kubelet[2873]: I0912 17:12:41.649578 2873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:12:41.661409 kubelet[2873]: E0912 17:12:41.653525 2873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-188.186498417e39217e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-188,UID:ip-172-31-30-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-188,},FirstTimestamp:2025-09-12 17:12:41.639305598 +0000 UTC m=+1.468072784,LastTimestamp:2025-09-12 17:12:41.639305598 +0000 UTC m=+1.468072784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-188,}"
Sep 12 17:12:41.662310 kubelet[2873]: I0912 17:12:41.662251 2873 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:12:41.662785 kubelet[2873]: I0912 17:12:41.662761 2873 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:12:41.668365 kubelet[2873]: I0912 17:12:41.668313 2873 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:12:41.669389 kubelet[2873]: I0912 17:12:41.669342 2873 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:12:41.669531 kubelet[2873]: I0912 17:12:41.669485 2873 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:12:41.670107 kubelet[2873]: E0912 17:12:41.662927 2873 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-188\" not found"
Sep 12 17:12:41.670369 kubelet[2873]: E0912 17:12:41.670321 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:12:41.670752 kubelet[2873]: E0912 17:12:41.670690 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-188?timeout=10s\": dial tcp 172.31.30.188:6443: connect: connection refused" interval="200ms"
Sep 12 17:12:41.671358 kubelet[2873]: I0912 17:12:41.671301 2873 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:12:41.671530 kubelet[2873]: I0912 17:12:41.671476 2873 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:12:41.673820 kubelet[2873]: I0912 17:12:41.673771 2873 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:12:41.704700 kubelet[2873]: E0912 17:12:41.704119 2873 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:12:41.708573 kubelet[2873]: I0912 17:12:41.708503 2873 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:12:41.711644 kubelet[2873]: I0912 17:12:41.711551 2873 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:12:41.711644 kubelet[2873]: I0912 17:12:41.711626 2873 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:12:41.711820 kubelet[2873]: I0912 17:12:41.711711 2873 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:12:41.711820 kubelet[2873]: I0912 17:12:41.711730 2873 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 17:12:41.711930 kubelet[2873]: E0912 17:12:41.711817 2873 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:12:41.713850 kubelet[2873]: E0912 17:12:41.713780 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 17:12:41.720481 kubelet[2873]: I0912 17:12:41.720449 2873 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:12:41.720823 kubelet[2873]: I0912 17:12:41.720799 2873 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:12:41.720958 kubelet[2873]: I0912 17:12:41.720939 2873 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:41.730328 kubelet[2873]: I0912 17:12:41.730294 2873 policy_none.go:49] "None policy: Start"
Sep 12 17:12:41.730547 kubelet[2873]: I0912 17:12:41.730509 2873 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:12:41.730683 kubelet[2873]: I0912 17:12:41.730665 2873 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:12:41.742971 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:12:41.761122 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:12:41.768660 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:12:41.771678 kubelet[2873]: E0912 17:12:41.771615 2873 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-188\" not found"
Sep 12 17:12:41.780640 kubelet[2873]: E0912 17:12:41.779728 2873 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 17:12:41.780640 kubelet[2873]: I0912 17:12:41.780004 2873 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:12:41.780640 kubelet[2873]: I0912 17:12:41.780021 2873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:12:41.781764 kubelet[2873]: I0912 17:12:41.781735 2873 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:12:41.783847 kubelet[2873]: E0912 17:12:41.783806 2873 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:12:41.784037 kubelet[2873]: E0912 17:12:41.784013 2873 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-188\" not found"
Sep 12 17:12:41.834046 systemd[1]: Created slice kubepods-burstable-pod56b286991b7e1fab3fea95a1e26cdca1.slice - libcontainer container kubepods-burstable-pod56b286991b7e1fab3fea95a1e26cdca1.slice.
Sep 12 17:12:41.848697 kubelet[2873]: E0912 17:12:41.848281 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188"
Sep 12 17:12:41.856201 systemd[1]: Created slice kubepods-burstable-pod20148becf49f68c035148ef19a0ce159.slice - libcontainer container kubepods-burstable-pod20148becf49f68c035148ef19a0ce159.slice.
Sep 12 17:12:41.861641 kubelet[2873]: E0912 17:12:41.860554 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188"
Sep 12 17:12:41.866936 systemd[1]: Created slice kubepods-burstable-podc10f1e50c5d29bea4666c79025852822.slice - libcontainer container kubepods-burstable-podc10f1e50c5d29bea4666c79025852822.slice.
Sep 12 17:12:41.870805 kubelet[2873]: I0912 17:12:41.870030 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:41.870805 kubelet[2873]: I0912 17:12:41.870094 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:41.870805 kubelet[2873]: I0912 17:12:41.870143 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:41.870805 kubelet[2873]: I0912 17:12:41.870183 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:41.870805 kubelet[2873]: I0912 17:12:41.870230 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-ca-certs\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:41.871143 kubelet[2873]: I0912 17:12:41.870267 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:41.871143 kubelet[2873]: I0912 17:12:41.870300 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:41.871143 kubelet[2873]: I0912 17:12:41.870333 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:41.871143 kubelet[2873]: I0912 17:12:41.870369 2873 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c10f1e50c5d29bea4666c79025852822-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-188\" (UID: \"c10f1e50c5d29bea4666c79025852822\") " pod="kube-system/kube-scheduler-ip-172-31-30-188"
Sep 12 17:12:41.872739 kubelet[2873]: E0912 17:12:41.871865 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-188?timeout=10s\": dial tcp 172.31.30.188:6443: connect: connection refused" interval="400ms"
Sep 12 17:12:41.872739 kubelet[2873]: E0912 17:12:41.872388 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188"
Sep 12 17:12:41.882901 kubelet[2873]: I0912 17:12:41.882867 2873 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188"
Sep 12 17:12:41.884050 kubelet[2873]: E0912 17:12:41.884010 2873 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.188:6443/api/v1/nodes\": dial tcp 172.31.30.188:6443: connect: connection refused" node="ip-172-31-30-188"
Sep 12 17:12:42.086744 kubelet[2873]: I0912 17:12:42.086574 2873 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188"
Sep 12 17:12:42.087118 kubelet[2873]: E0912 17:12:42.087054 2873 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.188:6443/api/v1/nodes\": dial tcp 172.31.30.188:6443: connect: connection refused" node="ip-172-31-30-188"
Sep 12 17:12:42.150155 containerd[2020]: time="2025-09-12T17:12:42.149783452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-188,Uid:56b286991b7e1fab3fea95a1e26cdca1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:42.162832 containerd[2020]: time="2025-09-12T17:12:42.162733349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-188,Uid:20148becf49f68c035148ef19a0ce159,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:42.176616 containerd[2020]: time="2025-09-12T17:12:42.174536741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-188,Uid:c10f1e50c5d29bea4666c79025852822,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:42.272517 kubelet[2873]: E0912 17:12:42.272451 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-188?timeout=10s\": dial tcp 172.31.30.188:6443: connect: connection refused" interval="800ms"
Sep 12 17:12:42.460152 kubelet[2873]: E0912 17:12:42.460094 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:12:42.490187 kubelet[2873]: I0912 17:12:42.489662 2873 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188"
Sep 12 17:12:42.490187 kubelet[2873]: E0912 17:12:42.490099 2873 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.188:6443/api/v1/nodes\": dial tcp 172.31.30.188:6443: connect: connection refused" node="ip-172-31-30-188"
Sep 12 17:12:42.706102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755657985.mount: Deactivated successfully.
Sep 12 17:12:42.724782 containerd[2020]: time="2025-09-12T17:12:42.724623415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:42.727436 containerd[2020]: time="2025-09-12T17:12:42.727366327Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:42.729560 containerd[2020]: time="2025-09-12T17:12:42.729186163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 12 17:12:42.731271 containerd[2020]: time="2025-09-12T17:12:42.731221423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:12:42.733633 containerd[2020]: time="2025-09-12T17:12:42.733567183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:42.736796 containerd[2020]: time="2025-09-12T17:12:42.736665019Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:42.738206 containerd[2020]: time="2025-09-12T17:12:42.738155683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:12:42.744355 containerd[2020]: time="2025-09-12T17:12:42.744272695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:42.748716 containerd[2020]: time="2025-09-12T17:12:42.748406923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.677442ms"
Sep 12 17:12:42.752815 containerd[2020]: time="2025-09-12T17:12:42.752736523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.896662ms"
Sep 12 17:12:42.761520 containerd[2020]: time="2025-09-12T17:12:42.761428087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 611.534031ms"
Sep 12 17:12:42.773439 kubelet[2873]: E0912 17:12:42.773351 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 17:12:42.985122 containerd[2020]: time="2025-09-12T17:12:42.982677393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:42.985122 containerd[2020]: time="2025-09-12T17:12:42.982790517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:42.985122 containerd[2020]: time="2025-09-12T17:12:42.982820325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:42.985122 containerd[2020]: time="2025-09-12T17:12:42.982972341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:42.988016 containerd[2020]: time="2025-09-12T17:12:42.986646081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:42.988016 containerd[2020]: time="2025-09-12T17:12:42.986759013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:42.988016 containerd[2020]: time="2025-09-12T17:12:42.986797533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:42.988416 containerd[2020]: time="2025-09-12T17:12:42.986952357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:42.989212 containerd[2020]: time="2025-09-12T17:12:42.988794561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:42.989212 containerd[2020]: time="2025-09-12T17:12:42.988901601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:42.989212 containerd[2020]: time="2025-09-12T17:12:42.988938909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:42.989212 containerd[2020]: time="2025-09-12T17:12:42.989091141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:43.030911 systemd[1]: Started cri-containerd-6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c.scope - libcontainer container 6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c. Sep 12 17:12:43.048939 systemd[1]: Started cri-containerd-2dfb4a353e7428ef384b5d3422091ca21c98de9e1506d668e702274104388aac.scope - libcontainer container 2dfb4a353e7428ef384b5d3422091ca21c98de9e1506d668e702274104388aac. Sep 12 17:12:43.064221 systemd[1]: Started cri-containerd-0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed.scope - libcontainer container 0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed. Sep 12 17:12:43.071053 kubelet[2873]: E0912 17:12:43.070973 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-188&limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:12:43.074699 kubelet[2873]: E0912 17:12:43.074624 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-188?timeout=10s\": dial tcp 172.31.30.188:6443: connect: connection refused" interval="1.6s" Sep 12 17:12:43.175735 kubelet[2873]: E0912 17:12:43.175416 2873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-188.186498417e39217e default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-188,UID:ip-172-31-30-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-188,},FirstTimestamp:2025-09-12 17:12:41.639305598 +0000 UTC m=+1.468072784,LastTimestamp:2025-09-12 17:12:41.639305598 +0000 UTC m=+1.468072784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-188,}" Sep 12 17:12:43.177850 containerd[2020]: time="2025-09-12T17:12:43.177182238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-188,Uid:20148becf49f68c035148ef19a0ce159,Namespace:kube-system,Attempt:0,} returns sandbox id \"6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c\"" Sep 12 17:12:43.191890 containerd[2020]: time="2025-09-12T17:12:43.191724582Z" level=info msg="CreateContainer within sandbox \"6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:12:43.195805 containerd[2020]: time="2025-09-12T17:12:43.195738594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-188,Uid:c10f1e50c5d29bea4666c79025852822,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed\"" Sep 12 17:12:43.207685 containerd[2020]: time="2025-09-12T17:12:43.207479922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-188,Uid:56b286991b7e1fab3fea95a1e26cdca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dfb4a353e7428ef384b5d3422091ca21c98de9e1506d668e702274104388aac\"" Sep 12 17:12:43.209929 containerd[2020]: time="2025-09-12T17:12:43.209779182Z" level=info msg="CreateContainer within sandbox 
\"0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:12:43.219639 containerd[2020]: time="2025-09-12T17:12:43.219565302Z" level=info msg="CreateContainer within sandbox \"2dfb4a353e7428ef384b5d3422091ca21c98de9e1506d668e702274104388aac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:12:43.239168 containerd[2020]: time="2025-09-12T17:12:43.238863930Z" level=info msg="CreateContainer within sandbox \"6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc\"" Sep 12 17:12:43.242774 containerd[2020]: time="2025-09-12T17:12:43.240871626Z" level=info msg="StartContainer for \"a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc\"" Sep 12 17:12:43.262675 kubelet[2873]: E0912 17:12:43.262444 2873 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:12:43.266927 containerd[2020]: time="2025-09-12T17:12:43.266704914Z" level=info msg="CreateContainer within sandbox \"0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887\"" Sep 12 17:12:43.268336 containerd[2020]: time="2025-09-12T17:12:43.268104162Z" level=info msg="StartContainer for \"bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887\"" Sep 12 17:12:43.272517 containerd[2020]: time="2025-09-12T17:12:43.272452890Z" level=info msg="CreateContainer within sandbox 
\"2dfb4a353e7428ef384b5d3422091ca21c98de9e1506d668e702274104388aac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79a2df16989ce4c3123c2c70844ba5a3070a4267a469f7996066674836b36ae9\"" Sep 12 17:12:43.274707 containerd[2020]: time="2025-09-12T17:12:43.273381510Z" level=info msg="StartContainer for \"79a2df16989ce4c3123c2c70844ba5a3070a4267a469f7996066674836b36ae9\"" Sep 12 17:12:43.294914 kubelet[2873]: I0912 17:12:43.294865 2873 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188" Sep 12 17:12:43.295426 kubelet[2873]: E0912 17:12:43.295354 2873 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.188:6443/api/v1/nodes\": dial tcp 172.31.30.188:6443: connect: connection refused" node="ip-172-31-30-188" Sep 12 17:12:43.298928 systemd[1]: Started cri-containerd-a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc.scope - libcontainer container a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc. Sep 12 17:12:43.371011 systemd[1]: Started cri-containerd-79a2df16989ce4c3123c2c70844ba5a3070a4267a469f7996066674836b36ae9.scope - libcontainer container 79a2df16989ce4c3123c2c70844ba5a3070a4267a469f7996066674836b36ae9. Sep 12 17:12:43.389001 systemd[1]: Started cri-containerd-bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887.scope - libcontainer container bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887. 
Sep 12 17:12:43.435635 containerd[2020]: time="2025-09-12T17:12:43.434331055Z" level=info msg="StartContainer for \"a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc\" returns successfully" Sep 12 17:12:43.521714 containerd[2020]: time="2025-09-12T17:12:43.521364871Z" level=info msg="StartContainer for \"bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887\" returns successfully" Sep 12 17:12:43.521714 containerd[2020]: time="2025-09-12T17:12:43.521381527Z" level=info msg="StartContainer for \"79a2df16989ce4c3123c2c70844ba5a3070a4267a469f7996066674836b36ae9\" returns successfully" Sep 12 17:12:43.665277 kubelet[2873]: E0912 17:12:43.665218 2873 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.188:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:12:43.729145 kubelet[2873]: E0912 17:12:43.728575 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:43.735550 kubelet[2873]: E0912 17:12:43.735499 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:43.737509 kubelet[2873]: E0912 17:12:43.737458 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:44.739036 kubelet[2873]: E0912 17:12:44.738986 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:44.740910 kubelet[2873]: 
E0912 17:12:44.739576 2873 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:44.899051 kubelet[2873]: I0912 17:12:44.898668 2873 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188" Sep 12 17:12:45.050714 update_engine[1995]: I20250912 17:12:45.049634 1995 update_attempter.cc:509] Updating boot flags... Sep 12 17:12:45.194617 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3163) Sep 12 17:12:45.617722 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3166) Sep 12 17:12:48.494634 kubelet[2873]: E0912 17:12:48.493968 2873 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-188\" not found" node="ip-172-31-30-188" Sep 12 17:12:48.588838 kubelet[2873]: I0912 17:12:48.588465 2873 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-188" Sep 12 17:12:48.643677 kubelet[2873]: I0912 17:12:48.643332 2873 apiserver.go:52] "Watching apiserver" Sep 12 17:12:48.666630 kubelet[2873]: I0912 17:12:48.664049 2873 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-188" Sep 12 17:12:48.669939 kubelet[2873]: I0912 17:12:48.669884 2873 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:12:48.680893 kubelet[2873]: E0912 17:12:48.680833 2873 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-188" Sep 12 17:12:48.680893 kubelet[2873]: I0912 17:12:48.680884 2873 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-188" Sep 12 17:12:48.695615 
kubelet[2873]: E0912 17:12:48.695018 2873 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-188" Sep 12 17:12:48.695615 kubelet[2873]: I0912 17:12:48.695077 2873 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-188" Sep 12 17:12:48.699551 kubelet[2873]: E0912 17:12:48.699465 2873 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-188" Sep 12 17:12:50.669768 systemd[1]: Reloading requested from client PID 3336 ('systemctl') (unit session-9.scope)... Sep 12 17:12:50.669792 systemd[1]: Reloading... Sep 12 17:12:50.852703 zram_generator::config[3376]: No configuration found. Sep 12 17:12:51.087476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:12:51.297003 systemd[1]: Reloading finished in 626 ms. Sep 12 17:12:51.386656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:12:51.404228 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:12:51.406662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:12:51.406745 systemd[1]: kubelet.service: Consumed 2.199s CPU time, 129.7M memory peak, 0B memory swap peak. Sep 12 17:12:51.415227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:12:51.754640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:12:51.775459 (kubelet)[3436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:12:51.887730 kubelet[3436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:12:51.887730 kubelet[3436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:12:51.887730 kubelet[3436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:12:51.888336 kubelet[3436]: I0912 17:12:51.887899 3436 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:12:51.904667 kubelet[3436]: I0912 17:12:51.903586 3436 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:12:51.904667 kubelet[3436]: I0912 17:12:51.903653 3436 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:12:51.904667 kubelet[3436]: I0912 17:12:51.904017 3436 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:12:51.906685 kubelet[3436]: I0912 17:12:51.906648 3436 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:12:51.911142 kubelet[3436]: I0912 17:12:51.911083 3436 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:12:51.922812 kubelet[3436]: E0912 17:12:51.922755 3436 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:12:51.923014 kubelet[3436]: I0912 17:12:51.922990 3436 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:12:51.928181 kubelet[3436]: I0912 17:12:51.928118 3436 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:12:51.928756 kubelet[3436]: I0912 17:12:51.928709 3436 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:12:51.929111 kubelet[3436]: I0912 17:12:51.928864 3436 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManag
erPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:12:51.929347 kubelet[3436]: I0912 17:12:51.929326 3436 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:12:51.929466 kubelet[3436]: I0912 17:12:51.929446 3436 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:12:51.929683 kubelet[3436]: I0912 17:12:51.929663 3436 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:12:51.930038 kubelet[3436]: I0912 17:12:51.930019 3436 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:12:51.931068 kubelet[3436]: I0912 17:12:51.931033 3436 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:12:51.933681 kubelet[3436]: I0912 17:12:51.931253 3436 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:12:51.933681 kubelet[3436]: I0912 17:12:51.931289 3436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:12:51.936298 kubelet[3436]: I0912 17:12:51.936214 3436 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:12:51.941636 kubelet[3436]: I0912 17:12:51.939475 3436 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:12:51.946638 kubelet[3436]: I0912 17:12:51.945224 3436 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:12:51.946638 kubelet[3436]: I0912 17:12:51.945293 3436 server.go:1289] "Started kubelet" Sep 12 17:12:51.953749 kubelet[3436]: I0912 17:12:51.953703 3436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
Sep 12 17:12:51.961536 kubelet[3436]: I0912 17:12:51.961465 3436 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:12:51.964066 kubelet[3436]: I0912 17:12:51.964018 3436 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:12:51.977402 kubelet[3436]: I0912 17:12:51.977313 3436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:12:51.977709 kubelet[3436]: I0912 17:12:51.977675 3436 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:12:51.978397 kubelet[3436]: I0912 17:12:51.978359 3436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:12:51.986327 kubelet[3436]: I0912 17:12:51.986269 3436 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:12:51.986532 kubelet[3436]: E0912 17:12:51.986492 3436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-188\" not found" Sep 12 17:12:51.992196 kubelet[3436]: I0912 17:12:51.992128 3436 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:12:51.992404 kubelet[3436]: I0912 17:12:51.992364 3436 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:12:52.006288 kubelet[3436]: I0912 17:12:52.006155 3436 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:12:52.008117 kubelet[3436]: I0912 17:12:52.006674 3436 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:12:52.024180 kubelet[3436]: I0912 17:12:52.024135 3436 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 12 17:12:52.025246 kubelet[3436]: I0912 17:12:52.025213 3436 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:12:52.030682 kubelet[3436]: I0912 17:12:52.030583 3436 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:12:52.030682 kubelet[3436]: I0912 17:12:52.030654 3436 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:12:52.030682 kubelet[3436]: I0912 17:12:52.030689 3436 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:12:52.030933 kubelet[3436]: I0912 17:12:52.030704 3436 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:12:52.030933 kubelet[3436]: E0912 17:12:52.030773 3436 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:12:52.131324 kubelet[3436]: E0912 17:12:52.131258 3436 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159287 3436 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159315 3436 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159350 3436 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159568 3436 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159588 3436 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159683 3436 policy_none.go:49] "None policy: Start" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159704 3436 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 
17:12:52.160740 kubelet[3436]: I0912 17:12:52.159728 3436 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:12:52.160740 kubelet[3436]: I0912 17:12:52.159917 3436 state_mem.go:75] "Updated machine memory state" Sep 12 17:12:52.173548 kubelet[3436]: E0912 17:12:52.173512 3436 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:12:52.176791 kubelet[3436]: I0912 17:12:52.176080 3436 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:12:52.176791 kubelet[3436]: I0912 17:12:52.176119 3436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:12:52.179550 kubelet[3436]: I0912 17:12:52.179436 3436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:12:52.188656 kubelet[3436]: E0912 17:12:52.188523 3436 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime"
Sep 12 17:12:52.299769 kubelet[3436]: I0912 17:12:52.299068 3436 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-188"
Sep 12 17:12:52.322645 kubelet[3436]: I0912 17:12:52.321933 3436 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-188"
Sep 12 17:12:52.322645 kubelet[3436]: I0912 17:12:52.322071 3436 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-188"
Sep 12 17:12:52.337109 kubelet[3436]: I0912 17:12:52.336770 3436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:52.337109 kubelet[3436]: I0912 17:12:52.336931 3436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.337844 kubelet[3436]: I0912 17:12:52.336792 3436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-188"
Sep 12 17:12:52.395697 kubelet[3436]: I0912 17:12:52.395620 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.395697 kubelet[3436]: I0912 17:12:52.395698 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.396708 kubelet[3436]: I0912 17:12:52.395741 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.396708 kubelet[3436]: I0912 17:12:52.395787 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:52.396708 kubelet[3436]: I0912 17:12:52.395846 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.396708 kubelet[3436]: I0912 17:12:52.395882 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20148becf49f68c035148ef19a0ce159-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-188\" (UID: \"20148becf49f68c035148ef19a0ce159\") " pod="kube-system/kube-controller-manager-ip-172-31-30-188"
Sep 12 17:12:52.396708 kubelet[3436]: I0912 17:12:52.395917 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c10f1e50c5d29bea4666c79025852822-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-188\" (UID: \"c10f1e50c5d29bea4666c79025852822\") " pod="kube-system/kube-scheduler-ip-172-31-30-188"
Sep 12 17:12:52.396991 kubelet[3436]: I0912 17:12:52.395954 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-ca-certs\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:52.396991 kubelet[3436]: I0912 17:12:52.395993 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56b286991b7e1fab3fea95a1e26cdca1-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-188\" (UID: \"56b286991b7e1fab3fea95a1e26cdca1\") " pod="kube-system/kube-apiserver-ip-172-31-30-188"
Sep 12 17:12:52.933562 kubelet[3436]: I0912 17:12:52.933497 3436 apiserver.go:52] "Watching apiserver"
Sep 12 17:12:52.993628 kubelet[3436]: I0912 17:12:52.992509 3436 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:12:53.159683 kubelet[3436]: I0912 17:12:53.158927 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-188" podStartSLOduration=1.158907087 podStartE2EDuration="1.158907087s" podCreationTimestamp="2025-09-12 17:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:53.158584935 +0000 UTC m=+1.371668348" watchObservedRunningTime="2025-09-12 17:12:53.158907087 +0000 UTC m=+1.371990500"
Sep 12 17:12:53.159683 kubelet[3436]: I0912 17:12:53.159104 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-188" podStartSLOduration=1.159095583 podStartE2EDuration="1.159095583s" podCreationTimestamp="2025-09-12 17:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:53.143384139 +0000 UTC m=+1.356467540" watchObservedRunningTime="2025-09-12 17:12:53.159095583 +0000 UTC m=+1.372178996"
Sep 12 17:12:53.203984 kubelet[3436]: I0912 17:12:53.203618 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-188" podStartSLOduration=1.203575863 podStartE2EDuration="1.203575863s" podCreationTimestamp="2025-09-12 17:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:53.179659611 +0000 UTC m=+1.392743048" watchObservedRunningTime="2025-09-12 17:12:53.203575863 +0000 UTC m=+1.416659288"
Sep 12 17:12:57.156624 kubelet[3436]: I0912 17:12:57.154836 3436 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:12:57.160214 containerd[2020]: time="2025-09-12T17:12:57.158999347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:12:57.163981 kubelet[3436]: I0912 17:12:57.159490 3436 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:12:57.165704 systemd[1]: Created slice kubepods-besteffort-pod36349660_eec7_4f6a_a9b4_e3b0f5385690.slice - libcontainer container kubepods-besteffort-pod36349660_eec7_4f6a_a9b4_e3b0f5385690.slice.
Sep 12 17:12:57.227440 kubelet[3436]: I0912 17:12:57.227293 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36349660-eec7-4f6a-a9b4-e3b0f5385690-xtables-lock\") pod \"kube-proxy-sgfl5\" (UID: \"36349660-eec7-4f6a-a9b4-e3b0f5385690\") " pod="kube-system/kube-proxy-sgfl5"
Sep 12 17:12:57.227440 kubelet[3436]: I0912 17:12:57.227421 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkdc\" (UniqueName: \"kubernetes.io/projected/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-api-access-dpkdc\") pod \"kube-proxy-sgfl5\" (UID: \"36349660-eec7-4f6a-a9b4-e3b0f5385690\") " pod="kube-system/kube-proxy-sgfl5"
Sep 12 17:12:57.227766 kubelet[3436]: I0912 17:12:57.227530 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-proxy\") pod \"kube-proxy-sgfl5\" (UID: \"36349660-eec7-4f6a-a9b4-e3b0f5385690\") " pod="kube-system/kube-proxy-sgfl5"
Sep 12 17:12:57.227766 kubelet[3436]: I0912 17:12:57.227609 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36349660-eec7-4f6a-a9b4-e3b0f5385690-lib-modules\") pod \"kube-proxy-sgfl5\" (UID: \"36349660-eec7-4f6a-a9b4-e3b0f5385690\") " pod="kube-system/kube-proxy-sgfl5"
Sep 12 17:12:57.342326 kubelet[3436]: E0912 17:12:57.342259 3436 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 12 17:12:57.342326 kubelet[3436]: E0912 17:12:57.342309 3436 projected.go:194] Error preparing data for projected volume kube-api-access-dpkdc for pod kube-system/kube-proxy-sgfl5: configmap "kube-root-ca.crt" not found
Sep 12 17:12:57.344062 kubelet[3436]: E0912 17:12:57.342430 3436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-api-access-dpkdc podName:36349660-eec7-4f6a-a9b4-e3b0f5385690 nodeName:}" failed. No retries permitted until 2025-09-12 17:12:57.84239392 +0000 UTC m=+6.055477321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dpkdc" (UniqueName: "kubernetes.io/projected/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-api-access-dpkdc") pod "kube-proxy-sgfl5" (UID: "36349660-eec7-4f6a-a9b4-e3b0f5385690") : configmap "kube-root-ca.crt" not found
Sep 12 17:12:57.933263 kubelet[3436]: E0912 17:12:57.933195 3436 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 12 17:12:57.933263 kubelet[3436]: E0912 17:12:57.933244 3436 projected.go:194] Error preparing data for projected volume kube-api-access-dpkdc for pod kube-system/kube-proxy-sgfl5: configmap "kube-root-ca.crt" not found
Sep 12 17:12:57.933561 kubelet[3436]: E0912 17:12:57.933336 3436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-api-access-dpkdc podName:36349660-eec7-4f6a-a9b4-e3b0f5385690 nodeName:}" failed. No retries permitted until 2025-09-12 17:12:58.933309863 +0000 UTC m=+7.146393264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dpkdc" (UniqueName: "kubernetes.io/projected/36349660-eec7-4f6a-a9b4-e3b0f5385690-kube-api-access-dpkdc") pod "kube-proxy-sgfl5" (UID: "36349660-eec7-4f6a-a9b4-e3b0f5385690") : configmap "kube-root-ca.crt" not found
Sep 12 17:12:58.349844 systemd[1]: Created slice kubepods-besteffort-pod57405433_a068_4f49_bd0e_10e4ab3fcdfa.slice - libcontainer container kubepods-besteffort-pod57405433_a068_4f49_bd0e_10e4ab3fcdfa.slice.
Sep 12 17:12:58.436280 kubelet[3436]: I0912 17:12:58.436111 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57405433-a068-4f49-bd0e-10e4ab3fcdfa-var-lib-calico\") pod \"tigera-operator-755d956888-hrlb2\" (UID: \"57405433-a068-4f49-bd0e-10e4ab3fcdfa\") " pod="tigera-operator/tigera-operator-755d956888-hrlb2"
Sep 12 17:12:58.436280 kubelet[3436]: I0912 17:12:58.436177 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zfx\" (UniqueName: \"kubernetes.io/projected/57405433-a068-4f49-bd0e-10e4ab3fcdfa-kube-api-access-88zfx\") pod \"tigera-operator-755d956888-hrlb2\" (UID: \"57405433-a068-4f49-bd0e-10e4ab3fcdfa\") " pod="tigera-operator/tigera-operator-755d956888-hrlb2"
Sep 12 17:12:58.659197 containerd[2020]: time="2025-09-12T17:12:58.659035558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hrlb2,Uid:57405433-a068-4f49-bd0e-10e4ab3fcdfa,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:12:58.709771 containerd[2020]: time="2025-09-12T17:12:58.709320443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:58.709771 containerd[2020]: time="2025-09-12T17:12:58.709404023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:58.709771 containerd[2020]: time="2025-09-12T17:12:58.709428815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:58.709771 containerd[2020]: time="2025-09-12T17:12:58.709564703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:58.753206 systemd[1]: Started cri-containerd-4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705.scope - libcontainer container 4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705.
Sep 12 17:12:58.814015 containerd[2020]: time="2025-09-12T17:12:58.813893579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hrlb2,Uid:57405433-a068-4f49-bd0e-10e4ab3fcdfa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705\""
Sep 12 17:12:58.817409 containerd[2020]: time="2025-09-12T17:12:58.817152827Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:12:58.981712 containerd[2020]: time="2025-09-12T17:12:58.981653016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgfl5,Uid:36349660-eec7-4f6a-a9b4-e3b0f5385690,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:59.026293 containerd[2020]: time="2025-09-12T17:12:59.025998044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:59.026293 containerd[2020]: time="2025-09-12T17:12:59.026162600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:59.026293 containerd[2020]: time="2025-09-12T17:12:59.026204180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:59.027139 containerd[2020]: time="2025-09-12T17:12:59.027015728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:59.070905 systemd[1]: Started cri-containerd-ca156f381194d4fcf7da7a4559ee8c2332f6c58c961b16a49ff39ff87cdf4082.scope - libcontainer container ca156f381194d4fcf7da7a4559ee8c2332f6c58c961b16a49ff39ff87cdf4082.
Sep 12 17:12:59.118157 containerd[2020]: time="2025-09-12T17:12:59.117924525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgfl5,Uid:36349660-eec7-4f6a-a9b4-e3b0f5385690,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca156f381194d4fcf7da7a4559ee8c2332f6c58c961b16a49ff39ff87cdf4082\""
Sep 12 17:12:59.132751 containerd[2020]: time="2025-09-12T17:12:59.132508977Z" level=info msg="CreateContainer within sandbox \"ca156f381194d4fcf7da7a4559ee8c2332f6c58c961b16a49ff39ff87cdf4082\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:12:59.167084 containerd[2020]: time="2025-09-12T17:12:59.167025585Z" level=info msg="CreateContainer within sandbox \"ca156f381194d4fcf7da7a4559ee8c2332f6c58c961b16a49ff39ff87cdf4082\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72151fc5d8145916ab0bad5c1df5eadaf39532a45bf0b0eda9c8069860e2ecb3\""
Sep 12 17:12:59.170634 containerd[2020]: time="2025-09-12T17:12:59.168397317Z" level=info msg="StartContainer for \"72151fc5d8145916ab0bad5c1df5eadaf39532a45bf0b0eda9c8069860e2ecb3\""
Sep 12 17:12:59.211924 systemd[1]: Started cri-containerd-72151fc5d8145916ab0bad5c1df5eadaf39532a45bf0b0eda9c8069860e2ecb3.scope - libcontainer container 72151fc5d8145916ab0bad5c1df5eadaf39532a45bf0b0eda9c8069860e2ecb3.
Sep 12 17:12:59.274236 containerd[2020]: time="2025-09-12T17:12:59.273557721Z" level=info msg="StartContainer for \"72151fc5d8145916ab0bad5c1df5eadaf39532a45bf0b0eda9c8069860e2ecb3\" returns successfully"
Sep 12 17:13:00.159204 kubelet[3436]: I0912 17:13:00.158557 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sgfl5" podStartSLOduration=3.158534038 podStartE2EDuration="3.158534038s" podCreationTimestamp="2025-09-12 17:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:13:00.15757627 +0000 UTC m=+8.370659671" watchObservedRunningTime="2025-09-12 17:13:00.158534038 +0000 UTC m=+8.371617511"
Sep 12 17:13:00.186284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327893827.mount: Deactivated successfully.
Sep 12 17:13:00.918895 containerd[2020]: time="2025-09-12T17:13:00.918815834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:13:00.921923 containerd[2020]: time="2025-09-12T17:13:00.921516470Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 12 17:13:00.924678 containerd[2020]: time="2025-09-12T17:13:00.924085850Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:13:00.929454 containerd[2020]: time="2025-09-12T17:13:00.929402522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:13:00.931066 containerd[2020]: time="2025-09-12T17:13:00.931012766Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.113601627s"
Sep 12 17:13:00.931212 containerd[2020]: time="2025-09-12T17:13:00.931181762Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 12 17:13:00.940982 containerd[2020]: time="2025-09-12T17:13:00.940908254Z" level=info msg="CreateContainer within sandbox \"4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:13:00.965850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261302238.mount: Deactivated successfully.
Sep 12 17:13:00.970651 containerd[2020]: time="2025-09-12T17:13:00.970555718Z" level=info msg="CreateContainer within sandbox \"4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9\""
Sep 12 17:13:00.971575 containerd[2020]: time="2025-09-12T17:13:00.971523494Z" level=info msg="StartContainer for \"2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9\""
Sep 12 17:13:01.025828 systemd[1]: Started cri-containerd-2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9.scope - libcontainer container 2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9.
Sep 12 17:13:01.083120 containerd[2020]: time="2025-09-12T17:13:01.082951738Z" level=info msg="StartContainer for \"2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9\" returns successfully"
Sep 12 17:13:09.894891 sudo[2349]: pam_unix(sudo:session): session closed for user root
Sep 12 17:13:09.926989 sshd[2346]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:09.937562 systemd[1]: sshd@8-172.31.30.188:22-147.75.109.163:34788.service: Deactivated successfully.
Sep 12 17:13:09.947884 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:13:09.950661 systemd[1]: session-9.scope: Consumed 11.685s CPU time, 153.5M memory peak, 0B memory swap peak.
Sep 12 17:13:09.958211 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:13:09.963664 systemd-logind[1992]: Removed session 9.
Sep 12 17:13:24.791020 kubelet[3436]: I0912 17:13:24.790909 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-hrlb2" podStartSLOduration=24.674669613 podStartE2EDuration="26.790887576s" podCreationTimestamp="2025-09-12 17:12:58 +0000 UTC" firstStartedPulling="2025-09-12 17:12:58.816467435 +0000 UTC m=+7.029550848" lastFinishedPulling="2025-09-12 17:13:00.93268541 +0000 UTC m=+9.145768811" observedRunningTime="2025-09-12 17:13:01.182761715 +0000 UTC m=+9.395845152" watchObservedRunningTime="2025-09-12 17:13:24.790887576 +0000 UTC m=+33.003970989"
Sep 12 17:13:24.798915 kubelet[3436]: E0912 17:13:24.798834 3436 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-30-188\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-188' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret"
Sep 12 17:13:24.818149 systemd[1]: Created slice kubepods-besteffort-pod332ed956_844e_45bd_81f9_04a228aa8f33.slice - libcontainer container kubepods-besteffort-pod332ed956_844e_45bd_81f9_04a228aa8f33.slice.
Sep 12 17:13:24.923738 kubelet[3436]: I0912 17:13:24.923276 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/332ed956-844e-45bd-81f9-04a228aa8f33-typha-certs\") pod \"calico-typha-7f5888fc7d-8gnw8\" (UID: \"332ed956-844e-45bd-81f9-04a228aa8f33\") " pod="calico-system/calico-typha-7f5888fc7d-8gnw8"
Sep 12 17:13:24.923738 kubelet[3436]: I0912 17:13:24.923405 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/332ed956-844e-45bd-81f9-04a228aa8f33-tigera-ca-bundle\") pod \"calico-typha-7f5888fc7d-8gnw8\" (UID: \"332ed956-844e-45bd-81f9-04a228aa8f33\") " pod="calico-system/calico-typha-7f5888fc7d-8gnw8"
Sep 12 17:13:24.923738 kubelet[3436]: I0912 17:13:24.923454 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkcl\" (UniqueName: \"kubernetes.io/projected/332ed956-844e-45bd-81f9-04a228aa8f33-kube-api-access-wxkcl\") pod \"calico-typha-7f5888fc7d-8gnw8\" (UID: \"332ed956-844e-45bd-81f9-04a228aa8f33\") " pod="calico-system/calico-typha-7f5888fc7d-8gnw8"
Sep 12 17:13:25.279228 systemd[1]: Created slice kubepods-besteffort-pod6bf54107_d9fe_4f98_9d98_6db8ba46a29b.slice - libcontainer container kubepods-besteffort-pod6bf54107_d9fe_4f98_9d98_6db8ba46a29b.slice.
Sep 12 17:13:25.326665 kubelet[3436]: I0912 17:13:25.326425 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-policysync\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.326665 kubelet[3436]: I0912 17:13:25.326519 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-tigera-ca-bundle\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.326665 kubelet[3436]: I0912 17:13:25.326568 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-cni-log-dir\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.326665 kubelet[3436]: I0912 17:13:25.326627 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-cni-net-dir\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328615 kubelet[3436]: I0912 17:13:25.327054 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ct8\" (UniqueName: \"kubernetes.io/projected/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-kube-api-access-t9ct8\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328615 kubelet[3436]: I0912 17:13:25.327143 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-cni-bin-dir\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328615 kubelet[3436]: I0912 17:13:25.327182 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-node-certs\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328615 kubelet[3436]: I0912 17:13:25.327222 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-lib-modules\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328615 kubelet[3436]: I0912 17:13:25.327272 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-xtables-lock\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328928 kubelet[3436]: I0912 17:13:25.327329 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-var-lib-calico\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328928 kubelet[3436]: I0912 17:13:25.327365 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-var-run-calico\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.328928 kubelet[3436]: I0912 17:13:25.327400 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bf54107-d9fe-4f98-9d98-6db8ba46a29b-flexvol-driver-host\") pod \"calico-node-kkr6n\" (UID: \"6bf54107-d9fe-4f98-9d98-6db8ba46a29b\") " pod="calico-system/calico-node-kkr6n"
Sep 12 17:13:25.431426 kubelet[3436]: E0912 17:13:25.431111 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.431426 kubelet[3436]: W0912 17:13:25.431154 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.431426 kubelet[3436]: E0912 17:13:25.431207 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.433620 kubelet[3436]: E0912 17:13:25.433097 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.433620 kubelet[3436]: W0912 17:13:25.433159 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.433620 kubelet[3436]: E0912 17:13:25.433194 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:13:25.434070 kubelet[3436]: E0912 17:13:25.433784 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.434070 kubelet[3436]: W0912 17:13:25.433814 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.434070 kubelet[3436]: E0912 17:13:25.433846 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.434939 kubelet[3436]: E0912 17:13:25.434281 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.434939 kubelet[3436]: W0912 17:13:25.434321 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.434939 kubelet[3436]: E0912 17:13:25.434358 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.435834 kubelet[3436]: E0912 17:13:25.435569 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.435834 kubelet[3436]: W0912 17:13:25.435628 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.435834 kubelet[3436]: E0912 17:13:25.435663 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.436787 kubelet[3436]: E0912 17:13:25.436184 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.436787 kubelet[3436]: W0912 17:13:25.436209 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.436787 kubelet[3436]: E0912 17:13:25.436239 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.438376 kubelet[3436]: E0912 17:13:25.437805 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.438376 kubelet[3436]: W0912 17:13:25.437848 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.438376 kubelet[3436]: E0912 17:13:25.437883 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.438376 kubelet[3436]: E0912 17:13:25.438356 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.438376 kubelet[3436]: W0912 17:13:25.438381 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.438845 kubelet[3436]: E0912 17:13:25.438410 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.439516 kubelet[3436]: E0912 17:13:25.438973 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.439516 kubelet[3436]: W0912 17:13:25.439012 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.439516 kubelet[3436]: E0912 17:13:25.439046 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.441057 kubelet[3436]: E0912 17:13:25.440801 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.441057 kubelet[3436]: W0912 17:13:25.440841 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.441057 kubelet[3436]: E0912 17:13:25.440874 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:13:25.441369 kubelet[3436]: E0912 17:13:25.441351 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.441430 kubelet[3436]: W0912 17:13:25.441376 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.441430 kubelet[3436]: E0912 17:13:25.441408 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.441977 kubelet[3436]: E0912 17:13:25.441922 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.441977 kubelet[3436]: W0912 17:13:25.441957 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.442320 kubelet[3436]: E0912 17:13:25.441986 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.444008 kubelet[3436]: E0912 17:13:25.442552 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.444008 kubelet[3436]: W0912 17:13:25.442587 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.444008 kubelet[3436]: E0912 17:13:25.442686 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.448784 kubelet[3436]: E0912 17:13:25.448715 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.448784 kubelet[3436]: W0912 17:13:25.448764 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.449001 kubelet[3436]: E0912 17:13:25.448804 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.449921 kubelet[3436]: E0912 17:13:25.449869 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.449921 kubelet[3436]: W0912 17:13:25.449911 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.450148 kubelet[3436]: E0912 17:13:25.449944 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:13:25.465302 kubelet[3436]: E0912 17:13:25.465136 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:13:25.465302 kubelet[3436]: W0912 17:13:25.465175 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:13:25.465302 kubelet[3436]: E0912 17:13:25.465209 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 17:13:25.474939 kubelet[3436]: E0912 17:13:25.474254 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.474939 kubelet[3436]: W0912 17:13:25.474318 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.474939 kubelet[3436]: E0912 17:13:25.474369 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.559390 kubelet[3436]: E0912 17:13:25.557467 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:25.588348 containerd[2020]: time="2025-09-12T17:13:25.588267120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkr6n,Uid:6bf54107-d9fe-4f98-9d98-6db8ba46a29b,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:25.600176 kubelet[3436]: E0912 17:13:25.599850 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.600176 kubelet[3436]: W0912 17:13:25.599919 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.600176 kubelet[3436]: E0912 17:13:25.599982 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.601208 kubelet[3436]: E0912 17:13:25.601144 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.601357 kubelet[3436]: W0912 17:13:25.601185 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.601357 kubelet[3436]: E0912 17:13:25.601265 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.603876 kubelet[3436]: E0912 17:13:25.603819 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.603876 kubelet[3436]: W0912 17:13:25.603862 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.604444 kubelet[3436]: E0912 17:13:25.603904 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.605457 kubelet[3436]: E0912 17:13:25.605392 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.605672 kubelet[3436]: W0912 17:13:25.605493 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.605672 kubelet[3436]: E0912 17:13:25.605538 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.607412 kubelet[3436]: E0912 17:13:25.607351 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.607412 kubelet[3436]: W0912 17:13:25.607399 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.607782 kubelet[3436]: E0912 17:13:25.607438 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.608353 kubelet[3436]: E0912 17:13:25.608295 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.608474 kubelet[3436]: W0912 17:13:25.608381 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.610733 kubelet[3436]: E0912 17:13:25.608708 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.613938 kubelet[3436]: E0912 17:13:25.613652 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.613938 kubelet[3436]: W0912 17:13:25.613694 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.613938 kubelet[3436]: E0912 17:13:25.613728 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.618672 kubelet[3436]: E0912 17:13:25.615165 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.618672 kubelet[3436]: W0912 17:13:25.615207 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.618672 kubelet[3436]: E0912 17:13:25.615247 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.621037 kubelet[3436]: E0912 17:13:25.620432 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.621037 kubelet[3436]: W0912 17:13:25.620473 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.621037 kubelet[3436]: E0912 17:13:25.620508 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.623706 kubelet[3436]: E0912 17:13:25.622131 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.623706 kubelet[3436]: W0912 17:13:25.622157 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.623706 kubelet[3436]: E0912 17:13:25.622188 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.623706 kubelet[3436]: E0912 17:13:25.623245 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.623706 kubelet[3436]: W0912 17:13:25.623272 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.623706 kubelet[3436]: E0912 17:13:25.623303 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.624090 kubelet[3436]: E0912 17:13:25.623717 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.624090 kubelet[3436]: W0912 17:13:25.623738 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.624090 kubelet[3436]: E0912 17:13:25.623763 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.624246 kubelet[3436]: E0912 17:13:25.624148 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.624246 kubelet[3436]: W0912 17:13:25.624170 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.624246 kubelet[3436]: E0912 17:13:25.624197 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.624632 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.628053 kubelet[3436]: W0912 17:13:25.624670 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.624740 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.625338 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.628053 kubelet[3436]: W0912 17:13:25.626072 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.626162 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.627304 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.628053 kubelet[3436]: W0912 17:13:25.627335 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.627370 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.628053 kubelet[3436]: E0912 17:13:25.627880 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.628679 kubelet[3436]: W0912 17:13:25.627906 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.628679 kubelet[3436]: E0912 17:13:25.627939 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.632576 kubelet[3436]: E0912 17:13:25.628828 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.632576 kubelet[3436]: W0912 17:13:25.628890 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.632576 kubelet[3436]: E0912 17:13:25.628927 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.632576 kubelet[3436]: E0912 17:13:25.630063 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.632576 kubelet[3436]: W0912 17:13:25.630138 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.632576 kubelet[3436]: E0912 17:13:25.630834 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.635732 kubelet[3436]: E0912 17:13:25.633701 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.635732 kubelet[3436]: W0912 17:13:25.635514 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.635732 kubelet[3436]: E0912 17:13:25.635646 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.639731 kubelet[3436]: E0912 17:13:25.638491 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.639872 kubelet[3436]: W0912 17:13:25.639759 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.639872 kubelet[3436]: E0912 17:13:25.639808 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.640412 kubelet[3436]: I0912 17:13:25.639906 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb4fm\" (UniqueName: \"kubernetes.io/projected/5a763ccd-0bce-40dd-95cb-421d108768a3-kube-api-access-fb4fm\") pod \"csi-node-driver-cg78j\" (UID: \"5a763ccd-0bce-40dd-95cb-421d108768a3\") " pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:25.642682 kubelet[3436]: E0912 17:13:25.642373 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.642682 kubelet[3436]: W0912 17:13:25.642463 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.642682 kubelet[3436]: E0912 17:13:25.642554 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.644128 kubelet[3436]: E0912 17:13:25.643649 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.644128 kubelet[3436]: W0912 17:13:25.643727 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.644128 kubelet[3436]: E0912 17:13:25.643764 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.646105 kubelet[3436]: E0912 17:13:25.645569 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.646105 kubelet[3436]: W0912 17:13:25.645686 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.646105 kubelet[3436]: E0912 17:13:25.645779 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.648371 kubelet[3436]: I0912 17:13:25.647711 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a763ccd-0bce-40dd-95cb-421d108768a3-registration-dir\") pod \"csi-node-driver-cg78j\" (UID: \"5a763ccd-0bce-40dd-95cb-421d108768a3\") " pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:25.649553 kubelet[3436]: E0912 17:13:25.648686 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.649553 kubelet[3436]: W0912 17:13:25.648945 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.649553 kubelet[3436]: E0912 17:13:25.649438 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.651484 kubelet[3436]: E0912 17:13:25.651280 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.651484 kubelet[3436]: W0912 17:13:25.651486 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.651484 kubelet[3436]: E0912 17:13:25.651527 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.653913 kubelet[3436]: E0912 17:13:25.653361 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.653913 kubelet[3436]: W0912 17:13:25.653408 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.653913 kubelet[3436]: E0912 17:13:25.653445 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.653913 kubelet[3436]: I0912 17:13:25.653507 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a763ccd-0bce-40dd-95cb-421d108768a3-kubelet-dir\") pod \"csi-node-driver-cg78j\" (UID: \"5a763ccd-0bce-40dd-95cb-421d108768a3\") " pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:25.656233 kubelet[3436]: E0912 17:13:25.655752 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.656233 kubelet[3436]: W0912 17:13:25.655800 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.656233 kubelet[3436]: E0912 17:13:25.655838 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.656233 kubelet[3436]: I0912 17:13:25.655899 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a763ccd-0bce-40dd-95cb-421d108768a3-socket-dir\") pod \"csi-node-driver-cg78j\" (UID: \"5a763ccd-0bce-40dd-95cb-421d108768a3\") " pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:25.659357 kubelet[3436]: E0912 17:13:25.658912 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.659357 kubelet[3436]: W0912 17:13:25.658955 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.659357 kubelet[3436]: E0912 17:13:25.658990 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.660172 kubelet[3436]: E0912 17:13:25.659871 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.660172 kubelet[3436]: W0912 17:13:25.659913 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.660172 kubelet[3436]: E0912 17:13:25.659947 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.660826 kubelet[3436]: E0912 17:13:25.660786 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.660826 kubelet[3436]: W0912 17:13:25.660892 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.660826 kubelet[3436]: E0912 17:13:25.660930 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.660826 kubelet[3436]: I0912 17:13:25.661018 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5a763ccd-0bce-40dd-95cb-421d108768a3-varrun\") pod \"csi-node-driver-cg78j\" (UID: \"5a763ccd-0bce-40dd-95cb-421d108768a3\") " pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:25.661980 kubelet[3436]: E0912 17:13:25.661939 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.662414 kubelet[3436]: W0912 17:13:25.662146 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.662414 kubelet[3436]: E0912 17:13:25.662190 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.663409 kubelet[3436]: E0912 17:13:25.663366 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.663695 kubelet[3436]: W0912 17:13:25.663580 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.665177 kubelet[3436]: E0912 17:13:25.664774 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.665657 kubelet[3436]: E0912 17:13:25.665559 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.665823 kubelet[3436]: W0912 17:13:25.665791 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.665954 kubelet[3436]: E0912 17:13:25.665922 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.666682 kubelet[3436]: E0912 17:13:25.666484 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.666682 kubelet[3436]: W0912 17:13:25.666519 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.666682 kubelet[3436]: E0912 17:13:25.666550 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.691040 containerd[2020]: time="2025-09-12T17:13:25.690709345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:25.691040 containerd[2020]: time="2025-09-12T17:13:25.690824737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:25.691937 containerd[2020]: time="2025-09-12T17:13:25.690916345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:25.693697 containerd[2020]: time="2025-09-12T17:13:25.693355105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:25.763258 kubelet[3436]: E0912 17:13:25.763201 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.763258 kubelet[3436]: W0912 17:13:25.763246 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.763505 kubelet[3436]: E0912 17:13:25.763282 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.765459 kubelet[3436]: E0912 17:13:25.765333 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.765459 kubelet[3436]: W0912 17:13:25.765378 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.765459 kubelet[3436]: E0912 17:13:25.765412 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.766851 kubelet[3436]: E0912 17:13:25.766796 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.766851 kubelet[3436]: W0912 17:13:25.766839 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.767087 kubelet[3436]: E0912 17:13:25.766875 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.768981 systemd[1]: Started cri-containerd-38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922.scope - libcontainer container 38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922. Sep 12 17:13:25.773559 kubelet[3436]: E0912 17:13:25.773463 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.773559 kubelet[3436]: W0912 17:13:25.773509 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.773559 kubelet[3436]: E0912 17:13:25.773546 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.777337 kubelet[3436]: E0912 17:13:25.776993 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.777337 kubelet[3436]: W0912 17:13:25.777035 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.777337 kubelet[3436]: E0912 17:13:25.777097 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.778433 kubelet[3436]: E0912 17:13:25.778102 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.778433 kubelet[3436]: W0912 17:13:25.778142 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.778433 kubelet[3436]: E0912 17:13:25.778191 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.779379 kubelet[3436]: E0912 17:13:25.779341 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.780041 kubelet[3436]: W0912 17:13:25.779611 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.780041 kubelet[3436]: E0912 17:13:25.779663 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.781082 kubelet[3436]: E0912 17:13:25.780842 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.781082 kubelet[3436]: W0912 17:13:25.780879 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.781082 kubelet[3436]: E0912 17:13:25.780935 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.782024 kubelet[3436]: E0912 17:13:25.781970 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.782024 kubelet[3436]: W0912 17:13:25.782013 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.782218 kubelet[3436]: E0912 17:13:25.782049 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.784197 kubelet[3436]: E0912 17:13:25.784134 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.784197 kubelet[3436]: W0912 17:13:25.784181 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.784781 kubelet[3436]: E0912 17:13:25.784219 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.786111 kubelet[3436]: E0912 17:13:25.786035 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.786307 kubelet[3436]: W0912 17:13:25.786209 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.786877 kubelet[3436]: E0912 17:13:25.786573 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.788337 kubelet[3436]: E0912 17:13:25.788274 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.788337 kubelet[3436]: W0912 17:13:25.788322 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.788916 kubelet[3436]: E0912 17:13:25.788360 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.789679 kubelet[3436]: E0912 17:13:25.789513 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.789679 kubelet[3436]: W0912 17:13:25.789556 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.789679 kubelet[3436]: E0912 17:13:25.789620 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.791325 kubelet[3436]: E0912 17:13:25.791268 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.791325 kubelet[3436]: W0912 17:13:25.791313 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.792546 kubelet[3436]: E0912 17:13:25.791350 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.792546 kubelet[3436]: E0912 17:13:25.792392 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.792546 kubelet[3436]: W0912 17:13:25.792424 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.792546 kubelet[3436]: E0912 17:13:25.792458 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.794221 kubelet[3436]: E0912 17:13:25.793927 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.794221 kubelet[3436]: W0912 17:13:25.794182 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.794221 kubelet[3436]: E0912 17:13:25.794228 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.796495 kubelet[3436]: E0912 17:13:25.796438 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.796495 kubelet[3436]: W0912 17:13:25.796482 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.796758 kubelet[3436]: E0912 17:13:25.796526 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.798267 kubelet[3436]: E0912 17:13:25.797660 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.798410 kubelet[3436]: W0912 17:13:25.798263 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.798410 kubelet[3436]: E0912 17:13:25.798314 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.800179 kubelet[3436]: E0912 17:13:25.800097 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.800179 kubelet[3436]: W0912 17:13:25.800141 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.800179 kubelet[3436]: E0912 17:13:25.800178 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.801539 kubelet[3436]: E0912 17:13:25.801470 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.801539 kubelet[3436]: W0912 17:13:25.801514 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.801539 kubelet[3436]: E0912 17:13:25.801552 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.802695 kubelet[3436]: E0912 17:13:25.802632 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.802695 kubelet[3436]: W0912 17:13:25.802678 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.802894 kubelet[3436]: E0912 17:13:25.802716 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.804176 kubelet[3436]: E0912 17:13:25.803825 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.804176 kubelet[3436]: W0912 17:13:25.803866 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.804176 kubelet[3436]: E0912 17:13:25.803900 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.804978 kubelet[3436]: E0912 17:13:25.804933 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.805352 kubelet[3436]: W0912 17:13:25.805136 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.805352 kubelet[3436]: E0912 17:13:25.805183 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.807807 kubelet[3436]: E0912 17:13:25.807764 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.808010 kubelet[3436]: W0912 17:13:25.807977 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.808368 kubelet[3436]: E0912 17:13:25.808169 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:25.808963 kubelet[3436]: E0912 17:13:25.808922 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.809935 kubelet[3436]: W0912 17:13:25.809711 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.809935 kubelet[3436]: E0912 17:13:25.809771 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:25.852456 kubelet[3436]: E0912 17:13:25.851848 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:25.852456 kubelet[3436]: W0912 17:13:25.851892 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:25.852456 kubelet[3436]: E0912 17:13:25.851932 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.018319 containerd[2020]: time="2025-09-12T17:13:26.017149846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkr6n,Uid:6bf54107-d9fe-4f98-9d98-6db8ba46a29b,Namespace:calico-system,Attempt:0,} returns sandbox id \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\"" Sep 12 17:13:26.021671 containerd[2020]: time="2025-09-12T17:13:26.021445090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:13:26.025247 kubelet[3436]: E0912 17:13:26.025091 3436 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Sep 12 17:13:26.027472 kubelet[3436]: E0912 17:13:26.027260 3436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/332ed956-844e-45bd-81f9-04a228aa8f33-typha-certs podName:332ed956-844e-45bd-81f9-04a228aa8f33 nodeName:}" failed. No retries permitted until 2025-09-12 17:13:26.527197014 +0000 UTC m=+34.740280415 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/332ed956-844e-45bd-81f9-04a228aa8f33-typha-certs") pod "calico-typha-7f5888fc7d-8gnw8" (UID: "332ed956-844e-45bd-81f9-04a228aa8f33") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:13:26.081923 kubelet[3436]: E0912 17:13:26.081236 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.082261 kubelet[3436]: W0912 17:13:26.082122 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.082261 kubelet[3436]: E0912 17:13:26.082180 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.183759 kubelet[3436]: E0912 17:13:26.183187 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.183759 kubelet[3436]: W0912 17:13:26.183226 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.183759 kubelet[3436]: E0912 17:13:26.183261 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.286448 kubelet[3436]: E0912 17:13:26.286157 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.286448 kubelet[3436]: W0912 17:13:26.286206 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.286448 kubelet[3436]: E0912 17:13:26.286242 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.389136 kubelet[3436]: E0912 17:13:26.388091 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.389136 kubelet[3436]: W0912 17:13:26.388248 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.389136 kubelet[3436]: E0912 17:13:26.388288 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.491111 kubelet[3436]: E0912 17:13:26.491056 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.491327 kubelet[3436]: W0912 17:13:26.491122 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.491327 kubelet[3436]: E0912 17:13:26.491166 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.594205 kubelet[3436]: E0912 17:13:26.594148 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.594407 kubelet[3436]: W0912 17:13:26.594227 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.594407 kubelet[3436]: E0912 17:13:26.594289 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.595626 kubelet[3436]: E0912 17:13:26.595299 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.595626 kubelet[3436]: W0912 17:13:26.595337 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.595626 kubelet[3436]: E0912 17:13:26.595371 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.596378 kubelet[3436]: E0912 17:13:26.596090 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.596378 kubelet[3436]: W0912 17:13:26.596124 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.596378 kubelet[3436]: E0912 17:13:26.596157 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.596856 kubelet[3436]: E0912 17:13:26.596823 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.597005 kubelet[3436]: W0912 17:13:26.596974 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.597144 kubelet[3436]: E0912 17:13:26.597114 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.598132 kubelet[3436]: E0912 17:13:26.598093 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.599207 kubelet[3436]: W0912 17:13:26.598337 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.599207 kubelet[3436]: E0912 17:13:26.598392 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:13:26.609852 kubelet[3436]: E0912 17:13:26.609813 3436 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:13:26.610089 kubelet[3436]: W0912 17:13:26.610037 3436 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:13:26.610696 kubelet[3436]: E0912 17:13:26.610660 3436 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:13:26.627274 containerd[2020]: time="2025-09-12T17:13:26.627212941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5888fc7d-8gnw8,Uid:332ed956-844e-45bd-81f9-04a228aa8f33,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:26.736799 containerd[2020]: time="2025-09-12T17:13:26.735205058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:26.736799 containerd[2020]: time="2025-09-12T17:13:26.735340502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:26.736799 containerd[2020]: time="2025-09-12T17:13:26.735383834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:26.736799 containerd[2020]: time="2025-09-12T17:13:26.735543794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:26.815294 systemd[1]: Started cri-containerd-50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db.scope - libcontainer container 50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db. Sep 12 17:13:27.010808 containerd[2020]: time="2025-09-12T17:13:27.009148091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5888fc7d-8gnw8,Uid:332ed956-844e-45bd-81f9-04a228aa8f33,Namespace:calico-system,Attempt:0,} returns sandbox id \"50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db\"" Sep 12 17:13:27.033824 kubelet[3436]: E0912 17:13:27.033734 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:27.053841 systemd[1]: run-containerd-runc-k8s.io-50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db-runc.fyfwcr.mount: Deactivated successfully. Sep 12 17:13:27.349671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303220346.mount: Deactivated successfully. 
Sep 12 17:13:27.598727 containerd[2020]: time="2025-09-12T17:13:27.598651178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:27.602957 containerd[2020]: time="2025-09-12T17:13:27.602780018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 12 17:13:27.606174 containerd[2020]: time="2025-09-12T17:13:27.606102590Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:27.612986 containerd[2020]: time="2025-09-12T17:13:27.612888638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:27.615906 containerd[2020]: time="2025-09-12T17:13:27.615825206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.594301708s" Sep 12 17:13:27.615906 containerd[2020]: time="2025-09-12T17:13:27.615895538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:13:27.621055 containerd[2020]: time="2025-09-12T17:13:27.620979926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:13:27.627741 containerd[2020]: time="2025-09-12T17:13:27.627672158Z" level=info msg="CreateContainer within 
sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:13:27.664761 containerd[2020]: time="2025-09-12T17:13:27.664262115Z" level=info msg="CreateContainer within sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc\"" Sep 12 17:13:27.666553 containerd[2020]: time="2025-09-12T17:13:27.666360519Z" level=info msg="StartContainer for \"41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc\"" Sep 12 17:13:27.750916 systemd[1]: Started cri-containerd-41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc.scope - libcontainer container 41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc. Sep 12 17:13:27.816837 containerd[2020]: time="2025-09-12T17:13:27.816736047Z" level=info msg="StartContainer for \"41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc\" returns successfully" Sep 12 17:13:27.870967 systemd[1]: cri-containerd-41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc.scope: Deactivated successfully. 
Sep 12 17:13:28.112893 containerd[2020]: time="2025-09-12T17:13:28.111762445Z" level=info msg="shim disconnected" id=41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc namespace=k8s.io Sep 12 17:13:28.112893 containerd[2020]: time="2025-09-12T17:13:28.111886489Z" level=warning msg="cleaning up after shim disconnected" id=41cf40468516b0ea19a9265fa53d800e340bb695f0e835315b0dddc207957bdc namespace=k8s.io Sep 12 17:13:28.112893 containerd[2020]: time="2025-09-12T17:13:28.112751569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:29.031774 kubelet[3436]: E0912 17:13:29.031482 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:29.717063 containerd[2020]: time="2025-09-12T17:13:29.716980061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:29.719826 containerd[2020]: time="2025-09-12T17:13:29.719766773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 12 17:13:29.722085 containerd[2020]: time="2025-09-12T17:13:29.721984613Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:29.727681 containerd[2020]: time="2025-09-12T17:13:29.727570949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:29.728862 containerd[2020]: time="2025-09-12T17:13:29.728691005Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.107639331s" Sep 12 17:13:29.728862 containerd[2020]: time="2025-09-12T17:13:29.728742833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:13:29.733229 containerd[2020]: time="2025-09-12T17:13:29.731915237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:13:29.760241 containerd[2020]: time="2025-09-12T17:13:29.760190297Z" level=info msg="CreateContainer within sandbox \"50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:13:29.793212 containerd[2020]: time="2025-09-12T17:13:29.793157261Z" level=info msg="CreateContainer within sandbox \"50aaddca5daf24361e2268083faf30d9d8979db5c9457decedac7d918931a3db\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a412649a0341a444754cdd993384badcbcd172bb8e874592f6138a6013698190\"" Sep 12 17:13:29.796095 containerd[2020]: time="2025-09-12T17:13:29.794174633Z" level=info msg="StartContainer for \"a412649a0341a444754cdd993384badcbcd172bb8e874592f6138a6013698190\"" Sep 12 17:13:29.849916 systemd[1]: Started cri-containerd-a412649a0341a444754cdd993384badcbcd172bb8e874592f6138a6013698190.scope - libcontainer container a412649a0341a444754cdd993384badcbcd172bb8e874592f6138a6013698190. 
Sep 12 17:13:29.914771 containerd[2020]: time="2025-09-12T17:13:29.914524626Z" level=info msg="StartContainer for \"a412649a0341a444754cdd993384badcbcd172bb8e874592f6138a6013698190\" returns successfully" Sep 12 17:13:31.033763 kubelet[3436]: E0912 17:13:31.031454 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:31.308714 kubelet[3436]: I0912 17:13:31.306054 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f5888fc7d-8gnw8" podStartSLOduration=4.588226839 podStartE2EDuration="7.305991953s" podCreationTimestamp="2025-09-12 17:13:24 +0000 UTC" firstStartedPulling="2025-09-12 17:13:27.013005995 +0000 UTC m=+35.226089396" lastFinishedPulling="2025-09-12 17:13:29.730771109 +0000 UTC m=+37.943854510" observedRunningTime="2025-09-12 17:13:30.335768356 +0000 UTC m=+38.548851805" watchObservedRunningTime="2025-09-12 17:13:31.305991953 +0000 UTC m=+39.519075390" Sep 12 17:13:32.910172 containerd[2020]: time="2025-09-12T17:13:32.910112397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:32.912303 containerd[2020]: time="2025-09-12T17:13:32.912230901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:13:32.912693 containerd[2020]: time="2025-09-12T17:13:32.912453561Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:32.917391 containerd[2020]: time="2025-09-12T17:13:32.916575849Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:32.918448 containerd[2020]: time="2025-09-12T17:13:32.918380433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.18639904s" Sep 12 17:13:32.918448 containerd[2020]: time="2025-09-12T17:13:32.918444981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:13:32.927142 containerd[2020]: time="2025-09-12T17:13:32.927067881Z" level=info msg="CreateContainer within sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:13:32.955656 containerd[2020]: time="2025-09-12T17:13:32.954192309Z" level=info msg="CreateContainer within sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254\"" Sep 12 17:13:32.958185 containerd[2020]: time="2025-09-12T17:13:32.958050141Z" level=info msg="StartContainer for \"fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254\"" Sep 12 17:13:32.961072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104990903.mount: Deactivated successfully. Sep 12 17:13:33.027307 systemd[1]: run-containerd-runc-k8s.io-fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254-runc.eyuS2o.mount: Deactivated successfully. 
Sep 12 17:13:33.032043 kubelet[3436]: E0912 17:13:33.031959 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:33.043799 systemd[1]: Started cri-containerd-fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254.scope - libcontainer container fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254. Sep 12 17:13:33.113974 containerd[2020]: time="2025-09-12T17:13:33.113451450Z" level=info msg="StartContainer for \"fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254\" returns successfully" Sep 12 17:13:34.583720 containerd[2020]: time="2025-09-12T17:13:34.583644345Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:13:34.589568 systemd[1]: cri-containerd-fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254.scope: Deactivated successfully. Sep 12 17:13:34.590038 systemd[1]: cri-containerd-fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254.scope: Consumed 1.032s CPU time. Sep 12 17:13:34.636936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254-rootfs.mount: Deactivated successfully. Sep 12 17:13:34.689091 kubelet[3436]: I0912 17:13:34.687547 3436 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:13:34.866532 systemd[1]: Created slice kubepods-burstable-pod19645dd7_24fd_4986_a5a4_93fb6c895793.slice - libcontainer container kubepods-burstable-pod19645dd7_24fd_4986_a5a4_93fb6c895793.slice. 
Sep 12 17:13:34.868749 kubelet[3436]: I0912 17:13:34.867934 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19645dd7-24fd-4986-a5a4-93fb6c895793-config-volume\") pod \"coredns-674b8bbfcf-bj892\" (UID: \"19645dd7-24fd-4986-a5a4-93fb6c895793\") " pod="kube-system/coredns-674b8bbfcf-bj892" Sep 12 17:13:34.870005 kubelet[3436]: I0912 17:13:34.868231 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvld\" (UniqueName: \"kubernetes.io/projected/19645dd7-24fd-4986-a5a4-93fb6c895793-kube-api-access-qlvld\") pod \"coredns-674b8bbfcf-bj892\" (UID: \"19645dd7-24fd-4986-a5a4-93fb6c895793\") " pod="kube-system/coredns-674b8bbfcf-bj892" Sep 12 17:13:34.950302 systemd[1]: Created slice kubepods-burstable-podcd1a85c3_f4b2_49ef_9fc6_6bb14d28eb17.slice - libcontainer container kubepods-burstable-podcd1a85c3_f4b2_49ef_9fc6_6bb14d28eb17.slice. 
Sep 12 17:13:34.970468 kubelet[3436]: I0912 17:13:34.970038 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktr2\" (UniqueName: \"kubernetes.io/projected/cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17-kube-api-access-6ktr2\") pod \"coredns-674b8bbfcf-jkf69\" (UID: \"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17\") " pod="kube-system/coredns-674b8bbfcf-jkf69" Sep 12 17:13:34.970468 kubelet[3436]: I0912 17:13:34.970171 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17-config-volume\") pod \"coredns-674b8bbfcf-jkf69\" (UID: \"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17\") " pod="kube-system/coredns-674b8bbfcf-jkf69" Sep 12 17:13:35.030707 systemd[1]: Created slice kubepods-besteffort-pod379e7da0_b94f_4766_ab46_ea563b6dd6ae.slice - libcontainer container kubepods-besteffort-pod379e7da0_b94f_4766_ab46_ea563b6dd6ae.slice. 
Sep 12 17:13:35.074523 kubelet[3436]: I0912 17:13:35.070901 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fm95\" (UniqueName: \"kubernetes.io/projected/20d4d8f5-a479-45e2-a048-aea30bb1c7f9-kube-api-access-7fm95\") pod \"calico-kube-controllers-56c7c98bd9-hsxmb\" (UID: \"20d4d8f5-a479-45e2-a048-aea30bb1c7f9\") " pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" Sep 12 17:13:35.074523 kubelet[3436]: I0912 17:13:35.071004 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/379e7da0-b94f-4766-ab46-ea563b6dd6ae-calico-apiserver-certs\") pod \"calico-apiserver-55d9c6cf68-z4b6c\" (UID: \"379e7da0-b94f-4766-ab46-ea563b6dd6ae\") " pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" Sep 12 17:13:35.074523 kubelet[3436]: I0912 17:13:35.071044 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg75s\" (UniqueName: \"kubernetes.io/projected/379e7da0-b94f-4766-ab46-ea563b6dd6ae-kube-api-access-hg75s\") pod \"calico-apiserver-55d9c6cf68-z4b6c\" (UID: \"379e7da0-b94f-4766-ab46-ea563b6dd6ae\") " pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" Sep 12 17:13:35.074523 kubelet[3436]: I0912 17:13:35.071170 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d4d8f5-a479-45e2-a048-aea30bb1c7f9-tigera-ca-bundle\") pod \"calico-kube-controllers-56c7c98bd9-hsxmb\" (UID: \"20d4d8f5-a479-45e2-a048-aea30bb1c7f9\") " pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" Sep 12 17:13:35.093657 systemd[1]: Created slice kubepods-besteffort-pod5a763ccd_0bce_40dd_95cb_421d108768a3.slice - libcontainer container kubepods-besteffort-pod5a763ccd_0bce_40dd_95cb_421d108768a3.slice. 
Sep 12 17:13:35.132482 containerd[2020]: time="2025-09-12T17:13:35.132318116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cg78j,Uid:5a763ccd-0bce-40dd-95cb-421d108768a3,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:35.164796 systemd[1]: Created slice kubepods-besteffort-pod20d4d8f5_a479_45e2_a048_aea30bb1c7f9.slice - libcontainer container kubepods-besteffort-pod20d4d8f5_a479_45e2_a048_aea30bb1c7f9.slice. Sep 12 17:13:35.177212 kubelet[3436]: I0912 17:13:35.175852 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb29a4ce-08f1-4876-8ac8-6366015c6f10-calico-apiserver-certs\") pod \"calico-apiserver-55d9c6cf68-vs5cz\" (UID: \"bb29a4ce-08f1-4876-8ac8-6366015c6f10\") " pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" Sep 12 17:13:35.177212 kubelet[3436]: I0912 17:13:35.176076 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjp4\" (UniqueName: \"kubernetes.io/projected/bb29a4ce-08f1-4876-8ac8-6366015c6f10-kube-api-access-qbjp4\") pod \"calico-apiserver-55d9c6cf68-vs5cz\" (UID: \"bb29a4ce-08f1-4876-8ac8-6366015c6f10\") " pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" Sep 12 17:13:35.192244 systemd[1]: Created slice kubepods-besteffort-pod23768b54_b53d_46a7_81d3_60c23e1d7799.slice - libcontainer container kubepods-besteffort-pod23768b54_b53d_46a7_81d3_60c23e1d7799.slice. 
Sep 12 17:13:35.206064 containerd[2020]: time="2025-09-12T17:13:35.205906700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bj892,Uid:19645dd7-24fd-4986-a5a4-93fb6c895793,Namespace:kube-system,Attempt:0,}" Sep 12 17:13:35.276931 kubelet[3436]: I0912 17:13:35.276867 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-ca-bundle\") pod \"whisker-855747c87d-4t672\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " pod="calico-system/whisker-855747c87d-4t672" Sep 12 17:13:35.278660 kubelet[3436]: I0912 17:13:35.277295 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-backend-key-pair\") pod \"whisker-855747c87d-4t672\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " pod="calico-system/whisker-855747c87d-4t672" Sep 12 17:13:35.278660 kubelet[3436]: I0912 17:13:35.277433 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872lp\" (UniqueName: \"kubernetes.io/projected/23768b54-b53d-46a7-81d3-60c23e1d7799-kube-api-access-872lp\") pod \"whisker-855747c87d-4t672\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " pod="calico-system/whisker-855747c87d-4t672" Sep 12 17:13:35.287104 systemd[1]: Created slice kubepods-besteffort-podfbef718a_5d24_4aa3_8e46_22c641d6b1fe.slice - libcontainer container kubepods-besteffort-podfbef718a_5d24_4aa3_8e46_22c641d6b1fe.slice. 
Sep 12 17:13:35.289044 containerd[2020]: time="2025-09-12T17:13:35.288673280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf69,Uid:cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17,Namespace:kube-system,Attempt:0,}" Sep 12 17:13:35.323505 systemd[1]: Created slice kubepods-besteffort-podbb29a4ce_08f1_4876_8ac8_6366015c6f10.slice - libcontainer container kubepods-besteffort-podbb29a4ce_08f1_4876_8ac8_6366015c6f10.slice. Sep 12 17:13:35.326450 containerd[2020]: time="2025-09-12T17:13:35.325782813Z" level=info msg="shim disconnected" id=fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254 namespace=k8s.io Sep 12 17:13:35.326450 containerd[2020]: time="2025-09-12T17:13:35.325910445Z" level=warning msg="cleaning up after shim disconnected" id=fe8fab8e2c3de82d973e9b48842ef76aebbfd7325a4d5db9d7f750f734ae0254 namespace=k8s.io Sep 12 17:13:35.326450 containerd[2020]: time="2025-09-12T17:13:35.325939113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:35.361639 containerd[2020]: time="2025-09-12T17:13:35.360993045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-z4b6c,Uid:379e7da0-b94f-4766-ab46-ea563b6dd6ae,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:13:35.379088 kubelet[3436]: I0912 17:13:35.378990 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjrc\" (UniqueName: \"kubernetes.io/projected/fbef718a-5d24-4aa3-8e46-22c641d6b1fe-kube-api-access-4cjrc\") pod \"goldmane-54d579b49d-28q4g\" (UID: \"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\") " pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:35.380469 kubelet[3436]: I0912 17:13:35.380269 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbef718a-5d24-4aa3-8e46-22c641d6b1fe-config\") pod \"goldmane-54d579b49d-28q4g\" (UID: 
\"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\") " pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:35.383835 kubelet[3436]: I0912 17:13:35.380744 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fbef718a-5d24-4aa3-8e46-22c641d6b1fe-goldmane-key-pair\") pod \"goldmane-54d579b49d-28q4g\" (UID: \"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\") " pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:35.383835 kubelet[3436]: I0912 17:13:35.380854 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbef718a-5d24-4aa3-8e46-22c641d6b1fe-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-28q4g\" (UID: \"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\") " pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:35.432225 containerd[2020]: time="2025-09-12T17:13:35.432082449Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:13:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:13:35.516439 containerd[2020]: time="2025-09-12T17:13:35.516345490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56c7c98bd9-hsxmb,Uid:20d4d8f5-a479-45e2-a048-aea30bb1c7f9,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:35.516932 containerd[2020]: time="2025-09-12T17:13:35.516874330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855747c87d-4t672,Uid:23768b54-b53d-46a7-81d3-60c23e1d7799,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:35.609951 containerd[2020]: time="2025-09-12T17:13:35.609861742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-28q4g,Uid:fbef718a-5d24-4aa3-8e46-22c641d6b1fe,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:35.649738 
containerd[2020]: time="2025-09-12T17:13:35.647017078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-vs5cz,Uid:bb29a4ce-08f1-4876-8ac8-6366015c6f10,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:13:35.943285 containerd[2020]: time="2025-09-12T17:13:35.943186080Z" level=error msg="Failed to destroy network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.946813 containerd[2020]: time="2025-09-12T17:13:35.946725492Z" level=error msg="encountered an error cleaning up failed sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.947000 containerd[2020]: time="2025-09-12T17:13:35.946863876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bj892,Uid:19645dd7-24fd-4986-a5a4-93fb6c895793,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.948831 kubelet[3436]: E0912 17:13:35.947221 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.948831 kubelet[3436]: E0912 17:13:35.947320 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bj892" Sep 12 17:13:35.948831 kubelet[3436]: E0912 17:13:35.947360 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bj892" Sep 12 17:13:35.949419 kubelet[3436]: E0912 17:13:35.947441 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bj892_kube-system(19645dd7-24fd-4986-a5a4-93fb6c895793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bj892_kube-system(19645dd7-24fd-4986-a5a4-93fb6c895793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bj892" podUID="19645dd7-24fd-4986-a5a4-93fb6c895793" Sep 12 17:13:35.974164 containerd[2020]: time="2025-09-12T17:13:35.973588800Z" level=error msg="Failed to destroy network for sandbox 
\"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.974686 containerd[2020]: time="2025-09-12T17:13:35.974386632Z" level=error msg="encountered an error cleaning up failed sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.975496 containerd[2020]: time="2025-09-12T17:13:35.974503980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cg78j,Uid:5a763ccd-0bce-40dd-95cb-421d108768a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.976466 kubelet[3436]: E0912 17:13:35.976364 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:35.976692 kubelet[3436]: E0912 17:13:35.976476 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:35.976692 kubelet[3436]: E0912 17:13:35.976519 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cg78j" Sep 12 17:13:35.981289 kubelet[3436]: E0912 17:13:35.978792 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cg78j_calico-system(5a763ccd-0bce-40dd-95cb-421d108768a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cg78j_calico-system(5a763ccd-0bce-40dd-95cb-421d108768a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:35.995739 containerd[2020]: time="2025-09-12T17:13:35.995562204Z" level=error msg="Failed to destroy network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.004641 containerd[2020]: time="2025-09-12T17:13:36.003389468Z" level=error msg="encountered an error cleaning up failed sandbox 
\"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.004841 containerd[2020]: time="2025-09-12T17:13:36.004683992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf69,Uid:cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.005260 kubelet[3436]: E0912 17:13:36.005098 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.005260 kubelet[3436]: E0912 17:13:36.005204 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkf69" Sep 12 17:13:36.005260 kubelet[3436]: E0912 17:13:36.005248 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkf69" Sep 12 17:13:36.005559 kubelet[3436]: E0912 17:13:36.005348 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jkf69_kube-system(cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jkf69_kube-system(cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkf69" podUID="cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17" Sep 12 17:13:36.011991 containerd[2020]: time="2025-09-12T17:13:36.011889548Z" level=error msg="Failed to destroy network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.013509 containerd[2020]: time="2025-09-12T17:13:36.012559040Z" level=error msg="encountered an error cleaning up failed sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.013509 containerd[2020]: time="2025-09-12T17:13:36.012710816Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56c7c98bd9-hsxmb,Uid:20d4d8f5-a479-45e2-a048-aea30bb1c7f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.014541 kubelet[3436]: E0912 17:13:36.013083 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.014541 kubelet[3436]: E0912 17:13:36.013196 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" Sep 12 17:13:36.014541 kubelet[3436]: E0912 17:13:36.013237 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" Sep 12 17:13:36.016406 kubelet[3436]: E0912 
17:13:36.013332 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56c7c98bd9-hsxmb_calico-system(20d4d8f5-a479-45e2-a048-aea30bb1c7f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56c7c98bd9-hsxmb_calico-system(20d4d8f5-a479-45e2-a048-aea30bb1c7f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" podUID="20d4d8f5-a479-45e2-a048-aea30bb1c7f9" Sep 12 17:13:36.032620 containerd[2020]: time="2025-09-12T17:13:36.031779536Z" level=error msg="Failed to destroy network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.034325 containerd[2020]: time="2025-09-12T17:13:36.034215092Z" level=error msg="encountered an error cleaning up failed sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.034685 containerd[2020]: time="2025-09-12T17:13:36.034337420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-z4b6c,Uid:379e7da0-b94f-4766-ab46-ea563b6dd6ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.036491 kubelet[3436]: E0912 17:13:36.034953 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.036491 kubelet[3436]: E0912 17:13:36.035050 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" Sep 12 17:13:36.036491 kubelet[3436]: E0912 17:13:36.035089 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" Sep 12 17:13:36.036841 kubelet[3436]: E0912 17:13:36.035169 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d9c6cf68-z4b6c_calico-apiserver(379e7da0-b94f-4766-ab46-ea563b6dd6ae)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-55d9c6cf68-z4b6c_calico-apiserver(379e7da0-b94f-4766-ab46-ea563b6dd6ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" podUID="379e7da0-b94f-4766-ab46-ea563b6dd6ae" Sep 12 17:13:36.083859 containerd[2020]: time="2025-09-12T17:13:36.083741408Z" level=error msg="Failed to destroy network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.084689 containerd[2020]: time="2025-09-12T17:13:36.084546848Z" level=error msg="encountered an error cleaning up failed sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.084889 containerd[2020]: time="2025-09-12T17:13:36.084761204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855747c87d-4t672,Uid:23768b54-b53d-46a7-81d3-60c23e1d7799,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.085390 kubelet[3436]: E0912 17:13:36.085313 3436 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.085552 kubelet[3436]: E0912 17:13:36.085443 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855747c87d-4t672" Sep 12 17:13:36.085552 kubelet[3436]: E0912 17:13:36.085528 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855747c87d-4t672" Sep 12 17:13:36.086014 kubelet[3436]: E0912 17:13:36.085681 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-855747c87d-4t672_calico-system(23768b54-b53d-46a7-81d3-60c23e1d7799)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-855747c87d-4t672_calico-system(23768b54-b53d-46a7-81d3-60c23e1d7799)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-855747c87d-4t672" podUID="23768b54-b53d-46a7-81d3-60c23e1d7799" Sep 12 17:13:36.113295 containerd[2020]: time="2025-09-12T17:13:36.113097716Z" level=error msg="Failed to destroy network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.118432 containerd[2020]: time="2025-09-12T17:13:36.116840708Z" level=error msg="encountered an error cleaning up failed sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.119197 containerd[2020]: time="2025-09-12T17:13:36.118763001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-vs5cz,Uid:bb29a4ce-08f1-4876-8ac8-6366015c6f10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.122638 kubelet[3436]: E0912 17:13:36.121832 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:13:36.122638 kubelet[3436]: E0912 17:13:36.121927 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" Sep 12 17:13:36.122638 kubelet[3436]: E0912 17:13:36.121963 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" Sep 12 17:13:36.123556 kubelet[3436]: E0912 17:13:36.122065 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d9c6cf68-vs5cz_calico-apiserver(bb29a4ce-08f1-4876-8ac8-6366015c6f10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d9c6cf68-vs5cz_calico-apiserver(bb29a4ce-08f1-4876-8ac8-6366015c6f10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" podUID="bb29a4ce-08f1-4876-8ac8-6366015c6f10" Sep 12 17:13:36.142234 containerd[2020]: time="2025-09-12T17:13:36.142152801Z" level=error msg="Failed to destroy network for sandbox 
\"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.142849 containerd[2020]: time="2025-09-12T17:13:36.142781289Z" level=error msg="encountered an error cleaning up failed sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.142985 containerd[2020]: time="2025-09-12T17:13:36.142886553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-28q4g,Uid:fbef718a-5d24-4aa3-8e46-22c641d6b1fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.143320 kubelet[3436]: E0912 17:13:36.143261 3436 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.143469 kubelet[3436]: E0912 17:13:36.143360 3436 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:36.143469 kubelet[3436]: E0912 17:13:36.143400 3436 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-28q4g" Sep 12 17:13:36.143619 kubelet[3436]: E0912 17:13:36.143478 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-28q4g_calico-system(fbef718a-5d24-4aa3-8e46-22c641d6b1fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-28q4g_calico-system(fbef718a-5d24-4aa3-8e46-22c641d6b1fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-28q4g" podUID="fbef718a-5d24-4aa3-8e46-22c641d6b1fe" Sep 12 17:13:36.319645 containerd[2020]: time="2025-09-12T17:13:36.314758173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:13:36.325520 kubelet[3436]: I0912 17:13:36.325070 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:36.327548 containerd[2020]: time="2025-09-12T17:13:36.327160906Z" level=info msg="StopPodSandbox for 
\"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" Sep 12 17:13:36.327871 containerd[2020]: time="2025-09-12T17:13:36.327703294Z" level=info msg="Ensure that sandbox ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c in task-service has been cleanup successfully" Sep 12 17:13:36.335276 kubelet[3436]: I0912 17:13:36.335195 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:36.337951 containerd[2020]: time="2025-09-12T17:13:36.337371382Z" level=info msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\"" Sep 12 17:13:36.346241 containerd[2020]: time="2025-09-12T17:13:36.345439198Z" level=info msg="Ensure that sandbox f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8 in task-service has been cleanup successfully" Sep 12 17:13:36.348243 kubelet[3436]: I0912 17:13:36.347377 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:36.350432 containerd[2020]: time="2025-09-12T17:13:36.349109494Z" level=info msg="StopPodSandbox for \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\"" Sep 12 17:13:36.350432 containerd[2020]: time="2025-09-12T17:13:36.349519534Z" level=info msg="Ensure that sandbox 752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468 in task-service has been cleanup successfully" Sep 12 17:13:36.364883 kubelet[3436]: I0912 17:13:36.364837 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:36.396102 containerd[2020]: time="2025-09-12T17:13:36.396021850Z" level=info msg="StopPodSandbox for \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\"" Sep 12 17:13:36.396404 containerd[2020]: 
time="2025-09-12T17:13:36.396336154Z" level=info msg="Ensure that sandbox 0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf in task-service has been cleanup successfully" Sep 12 17:13:36.419677 kubelet[3436]: I0912 17:13:36.419156 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:36.423633 containerd[2020]: time="2025-09-12T17:13:36.422116234Z" level=info msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\"" Sep 12 17:13:36.429500 containerd[2020]: time="2025-09-12T17:13:36.428752834Z" level=info msg="Ensure that sandbox 6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6 in task-service has been cleanup successfully" Sep 12 17:13:36.448947 kubelet[3436]: I0912 17:13:36.448893 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:36.455849 containerd[2020]: time="2025-09-12T17:13:36.455755210Z" level=info msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" Sep 12 17:13:36.456900 containerd[2020]: time="2025-09-12T17:13:36.456800002Z" level=info msg="Ensure that sandbox 29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7 in task-service has been cleanup successfully" Sep 12 17:13:36.480057 kubelet[3436]: I0912 17:13:36.480005 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:36.491585 containerd[2020]: time="2025-09-12T17:13:36.489695650Z" level=info msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" Sep 12 17:13:36.492254 containerd[2020]: time="2025-09-12T17:13:36.492008482Z" level=info msg="Ensure that sandbox 
89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317 in task-service has been cleanup successfully" Sep 12 17:13:36.532129 kubelet[3436]: I0912 17:13:36.530949 3436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:36.538712 containerd[2020]: time="2025-09-12T17:13:36.538399283Z" level=info msg="StopPodSandbox for \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\"" Sep 12 17:13:36.541406 containerd[2020]: time="2025-09-12T17:13:36.541320983Z" level=info msg="Ensure that sandbox 8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f in task-service has been cleanup successfully" Sep 12 17:13:36.564663 containerd[2020]: time="2025-09-12T17:13:36.564532799Z" level=error msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" failed" error="failed to destroy network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.566956 kubelet[3436]: E0912 17:13:36.566670 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:36.566956 kubelet[3436]: E0912 17:13:36.566765 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"} Sep 12 
17:13:36.566956 kubelet[3436]: E0912 17:13:36.566856 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20d4d8f5-a479-45e2-a048-aea30bb1c7f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.566956 kubelet[3436]: E0912 17:13:36.566898 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20d4d8f5-a479-45e2-a048-aea30bb1c7f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" podUID="20d4d8f5-a479-45e2-a048-aea30bb1c7f9" Sep 12 17:13:36.591730 containerd[2020]: time="2025-09-12T17:13:36.591500135Z" level=error msg="StopPodSandbox for \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\" failed" error="failed to destroy network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.594046 kubelet[3436]: E0912 17:13:36.592732 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:36.594046 kubelet[3436]: E0912 17:13:36.592808 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468"} Sep 12 17:13:36.594046 kubelet[3436]: E0912 17:13:36.592864 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19645dd7-24fd-4986-a5a4-93fb6c895793\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.594046 kubelet[3436]: E0912 17:13:36.592909 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19645dd7-24fd-4986-a5a4-93fb6c895793\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bj892" podUID="19645dd7-24fd-4986-a5a4-93fb6c895793" Sep 12 17:13:36.611200 containerd[2020]: time="2025-09-12T17:13:36.611103935Z" level=error msg="StopPodSandbox for \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\" failed" error="failed to destroy network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.612756 kubelet[3436]: E0912 17:13:36.611534 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:36.612756 kubelet[3436]: E0912 17:13:36.611624 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf"} Sep 12 17:13:36.612756 kubelet[3436]: E0912 17:13:36.611741 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb29a4ce-08f1-4876-8ac8-6366015c6f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.612756 kubelet[3436]: E0912 17:13:36.611794 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb29a4ce-08f1-4876-8ac8-6366015c6f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" podUID="bb29a4ce-08f1-4876-8ac8-6366015c6f10" Sep 12 17:13:36.637653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f-shm.mount: Deactivated successfully. Sep 12 17:13:36.638933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf-shm.mount: Deactivated successfully. Sep 12 17:13:36.639410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8-shm.mount: Deactivated successfully. Sep 12 17:13:36.639580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c-shm.mount: Deactivated successfully. Sep 12 17:13:36.639845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7-shm.mount: Deactivated successfully. Sep 12 17:13:36.639996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317-shm.mount: Deactivated successfully. Sep 12 17:13:36.640140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468-shm.mount: Deactivated successfully. Sep 12 17:13:36.640293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6-shm.mount: Deactivated successfully. 
Sep 12 17:13:36.655254 containerd[2020]: time="2025-09-12T17:13:36.655138283Z" level=error msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" failed" error="failed to destroy network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.655928 kubelet[3436]: E0912 17:13:36.655511 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:36.655928 kubelet[3436]: E0912 17:13:36.655619 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317"} Sep 12 17:13:36.655928 kubelet[3436]: E0912 17:13:36.655730 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.655928 kubelet[3436]: E0912 17:13:36.655783 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkf69" podUID="cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17" Sep 12 17:13:36.666514 containerd[2020]: time="2025-09-12T17:13:36.666408623Z" level=error msg="StopPodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" failed" error="failed to destroy network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.667148 kubelet[3436]: E0912 17:13:36.666790 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:36.667148 kubelet[3436]: E0912 17:13:36.666859 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c"} Sep 12 17:13:36.667148 kubelet[3436]: E0912 17:13:36.666912 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23768b54-b53d-46a7-81d3-60c23e1d7799\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.667148 kubelet[3436]: E0912 17:13:36.666953 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23768b54-b53d-46a7-81d3-60c23e1d7799\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-855747c87d-4t672" podUID="23768b54-b53d-46a7-81d3-60c23e1d7799" Sep 12 17:13:36.675947 containerd[2020]: time="2025-09-12T17:13:36.675703655Z" level=error msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" failed" error="failed to destroy network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.676553 kubelet[3436]: E0912 17:13:36.676301 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:36.676553 kubelet[3436]: E0912 17:13:36.676389 3436 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"} Sep 12 17:13:36.676553 kubelet[3436]: E0912 17:13:36.676449 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a763ccd-0bce-40dd-95cb-421d108768a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.676553 kubelet[3436]: E0912 17:13:36.676490 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a763ccd-0bce-40dd-95cb-421d108768a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cg78j" podUID="5a763ccd-0bce-40dd-95cb-421d108768a3" Sep 12 17:13:36.703994 containerd[2020]: time="2025-09-12T17:13:36.703880255Z" level=error msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" failed" error="failed to destroy network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.704416 kubelet[3436]: E0912 17:13:36.704278 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:36.704416 kubelet[3436]: E0912 17:13:36.704365 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7"} Sep 12 17:13:36.705797 kubelet[3436]: E0912 17:13:36.704432 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"379e7da0-b94f-4766-ab46-ea563b6dd6ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.705797 kubelet[3436]: E0912 17:13:36.704476 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"379e7da0-b94f-4766-ab46-ea563b6dd6ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" podUID="379e7da0-b94f-4766-ab46-ea563b6dd6ae" Sep 12 17:13:36.713181 containerd[2020]: time="2025-09-12T17:13:36.713112191Z" level=error msg="StopPodSandbox for 
\"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\" failed" error="failed to destroy network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:13:36.713899 kubelet[3436]: E0912 17:13:36.713825 3436 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:36.714562 kubelet[3436]: E0912 17:13:36.713911 3436 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f"} Sep 12 17:13:36.714562 kubelet[3436]: E0912 17:13:36.713983 3436 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:13:36.714562 kubelet[3436]: E0912 17:13:36.714031 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbef718a-5d24-4aa3-8e46-22c641d6b1fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-28q4g" podUID="fbef718a-5d24-4aa3-8e46-22c641d6b1fe" Sep 12 17:13:42.344207 systemd[1]: Started sshd@9-172.31.30.188:22-147.75.109.163:60360.service - OpenSSH per-connection server daemon (147.75.109.163:60360). Sep 12 17:13:42.568404 sshd[4577]: Accepted publickey for core from 147.75.109.163 port 60360 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:13:42.572032 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:42.587681 systemd-logind[1992]: New session 10 of user core. Sep 12 17:13:42.596439 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:13:43.027318 sshd[4577]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:43.038313 systemd[1]: sshd@9-172.31.30.188:22-147.75.109.163:60360.service: Deactivated successfully. Sep 12 17:13:43.046396 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:13:43.049296 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:13:43.053420 systemd-logind[1992]: Removed session 10. Sep 12 17:13:43.680467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000890215.mount: Deactivated successfully. 
Sep 12 17:13:43.778033 containerd[2020]: time="2025-09-12T17:13:43.777942943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:43.782347 containerd[2020]: time="2025-09-12T17:13:43.782253283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:13:43.786553 containerd[2020]: time="2025-09-12T17:13:43.786461863Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:43.797487 containerd[2020]: time="2025-09-12T17:13:43.797374075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:43.802070 containerd[2020]: time="2025-09-12T17:13:43.800403715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.485573506s" Sep 12 17:13:43.802070 containerd[2020]: time="2025-09-12T17:13:43.800487523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:13:43.856078 containerd[2020]: time="2025-09-12T17:13:43.856014343Z" level=info msg="CreateContainer within sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:13:43.931586 containerd[2020]: time="2025-09-12T17:13:43.931498003Z" level=info 
msg="CreateContainer within sandbox \"38825309cd01953e53a0774771bfc4561a07ab2f67fbd3f775de7cb483690922\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"430cec1fae1b111a693e68cdcc82b17668aa28022e7f47f02aba2aa8393f4fad\"" Sep 12 17:13:43.933109 containerd[2020]: time="2025-09-12T17:13:43.933053359Z" level=info msg="StartContainer for \"430cec1fae1b111a693e68cdcc82b17668aa28022e7f47f02aba2aa8393f4fad\"" Sep 12 17:13:43.987673 systemd[1]: Started cri-containerd-430cec1fae1b111a693e68cdcc82b17668aa28022e7f47f02aba2aa8393f4fad.scope - libcontainer container 430cec1fae1b111a693e68cdcc82b17668aa28022e7f47f02aba2aa8393f4fad. Sep 12 17:13:44.064727 containerd[2020]: time="2025-09-12T17:13:44.063986656Z" level=info msg="StartContainer for \"430cec1fae1b111a693e68cdcc82b17668aa28022e7f47f02aba2aa8393f4fad\" returns successfully" Sep 12 17:13:44.366837 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:13:44.366996 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 17:13:44.658622 kubelet[3436]: I0912 17:13:44.657832 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kkr6n" podStartSLOduration=1.8721600619999998 podStartE2EDuration="19.657774607s" podCreationTimestamp="2025-09-12 17:13:25 +0000 UTC" firstStartedPulling="2025-09-12 17:13:26.02091775 +0000 UTC m=+34.234001163" lastFinishedPulling="2025-09-12 17:13:43.806532295 +0000 UTC m=+52.019615708" observedRunningTime="2025-09-12 17:13:44.655100515 +0000 UTC m=+52.868183988" watchObservedRunningTime="2025-09-12 17:13:44.657774607 +0000 UTC m=+52.870858020" Sep 12 17:13:44.687823 containerd[2020]: time="2025-09-12T17:13:44.685269019Z" level=info msg="StopPodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.938 [INFO][4666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.940 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" iface="eth0" netns="/var/run/netns/cni-62d0baa9-af66-2962-d535-ce42c666e57d" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.941 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" iface="eth0" netns="/var/run/netns/cni-62d0baa9-af66-2962-d535-ce42c666e57d" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.943 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" iface="eth0" netns="/var/run/netns/cni-62d0baa9-af66-2962-d535-ce42c666e57d" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.943 [INFO][4666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:44.943 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.030 [INFO][4686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.033 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.033 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.053 [WARNING][4686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.053 [INFO][4686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.057 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:45.067367 containerd[2020]: 2025-09-12 17:13:45.064 [INFO][4666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:45.072369 containerd[2020]: time="2025-09-12T17:13:45.069841841Z" level=info msg="TearDown network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" successfully" Sep 12 17:13:45.072369 containerd[2020]: time="2025-09-12T17:13:45.069907301Z" level=info msg="StopPodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" returns successfully" Sep 12 17:13:45.081265 systemd[1]: run-netns-cni\x2d62d0baa9\x2daf66\x2d2962\x2dd535\x2dce42c666e57d.mount: Deactivated successfully. 
Sep 12 17:13:45.285469 kubelet[3436]: I0912 17:13:45.285214 3436 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-ca-bundle\") pod \"23768b54-b53d-46a7-81d3-60c23e1d7799\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " Sep 12 17:13:45.286565 kubelet[3436]: I0912 17:13:45.286032 3436 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872lp\" (UniqueName: \"kubernetes.io/projected/23768b54-b53d-46a7-81d3-60c23e1d7799-kube-api-access-872lp\") pod \"23768b54-b53d-46a7-81d3-60c23e1d7799\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " Sep 12 17:13:45.286565 kubelet[3436]: I0912 17:13:45.286108 3436 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-backend-key-pair\") pod \"23768b54-b53d-46a7-81d3-60c23e1d7799\" (UID: \"23768b54-b53d-46a7-81d3-60c23e1d7799\") " Sep 12 17:13:45.286565 kubelet[3436]: I0912 17:13:45.286171 3436 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "23768b54-b53d-46a7-81d3-60c23e1d7799" (UID: "23768b54-b53d-46a7-81d3-60c23e1d7799"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:13:45.287414 kubelet[3436]: I0912 17:13:45.287024 3436 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-ca-bundle\") on node \"ip-172-31-30-188\" DevicePath \"\"" Sep 12 17:13:45.312699 kubelet[3436]: I0912 17:13:45.311034 3436 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23768b54-b53d-46a7-81d3-60c23e1d7799-kube-api-access-872lp" (OuterVolumeSpecName: "kube-api-access-872lp") pod "23768b54-b53d-46a7-81d3-60c23e1d7799" (UID: "23768b54-b53d-46a7-81d3-60c23e1d7799"). InnerVolumeSpecName "kube-api-access-872lp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:13:45.312699 kubelet[3436]: I0912 17:13:45.311230 3436 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "23768b54-b53d-46a7-81d3-60c23e1d7799" (UID: "23768b54-b53d-46a7-81d3-60c23e1d7799"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:13:45.316124 systemd[1]: var-lib-kubelet-pods-23768b54\x2db53d\x2d46a7\x2d81d3\x2d60c23e1d7799-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:13:45.322012 systemd[1]: var-lib-kubelet-pods-23768b54\x2db53d\x2d46a7\x2d81d3\x2d60c23e1d7799-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d872lp.mount: Deactivated successfully. 
Sep 12 17:13:45.388760 kubelet[3436]: I0912 17:13:45.388308 3436 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-872lp\" (UniqueName: \"kubernetes.io/projected/23768b54-b53d-46a7-81d3-60c23e1d7799-kube-api-access-872lp\") on node \"ip-172-31-30-188\" DevicePath \"\"" Sep 12 17:13:45.388760 kubelet[3436]: I0912 17:13:45.388364 3436 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23768b54-b53d-46a7-81d3-60c23e1d7799-whisker-backend-key-pair\") on node \"ip-172-31-30-188\" DevicePath \"\"" Sep 12 17:13:45.596476 systemd[1]: Removed slice kubepods-besteffort-pod23768b54_b53d_46a7_81d3_60c23e1d7799.slice - libcontainer container kubepods-besteffort-pod23768b54_b53d_46a7_81d3_60c23e1d7799.slice. Sep 12 17:13:45.737952 systemd[1]: Created slice kubepods-besteffort-podbbf1192a_e737_4fc7_9849_9832a05706f3.slice - libcontainer container kubepods-besteffort-podbbf1192a_e737_4fc7_9849_9832a05706f3.slice. Sep 12 17:13:45.893296 kubelet[3436]: I0912 17:13:45.893012 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbf1192a-e737-4fc7-9849-9832a05706f3-whisker-backend-key-pair\") pod \"whisker-6bc75cf845-pvlzj\" (UID: \"bbf1192a-e737-4fc7-9849-9832a05706f3\") " pod="calico-system/whisker-6bc75cf845-pvlzj" Sep 12 17:13:45.893296 kubelet[3436]: I0912 17:13:45.893092 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbf1192a-e737-4fc7-9849-9832a05706f3-whisker-ca-bundle\") pod \"whisker-6bc75cf845-pvlzj\" (UID: \"bbf1192a-e737-4fc7-9849-9832a05706f3\") " pod="calico-system/whisker-6bc75cf845-pvlzj" Sep 12 17:13:45.893296 kubelet[3436]: I0912 17:13:45.893134 3436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-49vdh\" (UniqueName: \"kubernetes.io/projected/bbf1192a-e737-4fc7-9849-9832a05706f3-kube-api-access-49vdh\") pod \"whisker-6bc75cf845-pvlzj\" (UID: \"bbf1192a-e737-4fc7-9849-9832a05706f3\") " pod="calico-system/whisker-6bc75cf845-pvlzj" Sep 12 17:13:46.037159 kubelet[3436]: I0912 17:13:46.036786 3436 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23768b54-b53d-46a7-81d3-60c23e1d7799" path="/var/lib/kubelet/pods/23768b54-b53d-46a7-81d3-60c23e1d7799/volumes" Sep 12 17:13:46.046446 containerd[2020]: time="2025-09-12T17:13:46.046384818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc75cf845-pvlzj,Uid:bbf1192a-e737-4fc7-9849-9832a05706f3,Namespace:calico-system,Attempt:0,}" Sep 12 17:13:46.307238 (udev-worker)[4638]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:13:46.308161 systemd-networkd[1934]: caliafad6e6af0d: Link UP Sep 12 17:13:46.310333 systemd-networkd[1934]: caliafad6e6af0d: Gained carrier Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.142 [INFO][4730] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.168 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0 whisker-6bc75cf845- calico-system bbf1192a-e737-4fc7-9849-9832a05706f3 966 0 2025-09-12 17:13:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bc75cf845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-188 whisker-6bc75cf845-pvlzj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliafad6e6af0d [] [] }} ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" 
WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.168 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.217 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" HandleID="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Workload="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.217 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" HandleID="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Workload="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-188", "pod":"whisker-6bc75cf845-pvlzj", "timestamp":"2025-09-12 17:13:46.217338703 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.217 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.217 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.217 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.233 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.242 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.252 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.256 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.260 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.260 [INFO][4741] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.263 [INFO][4741] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2 Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.269 [INFO][4741] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.280 [INFO][4741] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.1/26] block=192.168.103.0/26 
handle="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.281 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.1/26] handle="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" host="ip-172-31-30-188" Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.281 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:46.368967 containerd[2020]: 2025-09-12 17:13:46.281 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.1/26] IPv6=[] ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" HandleID="k8s-pod-network.9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Workload="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.293 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0", GenerateName:"whisker-6bc75cf845-", Namespace:"calico-system", SelfLink:"", UID:"bbf1192a-e737-4fc7-9849-9832a05706f3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc75cf845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"whisker-6bc75cf845-pvlzj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliafad6e6af0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.293 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.1/32] ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.293 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafad6e6af0d ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.310 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.312 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0", GenerateName:"whisker-6bc75cf845-", Namespace:"calico-system", SelfLink:"", UID:"bbf1192a-e737-4fc7-9849-9832a05706f3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc75cf845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2", Pod:"whisker-6bc75cf845-pvlzj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliafad6e6af0d", MAC:"e6:0f:96:4d:02:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:46.380566 containerd[2020]: 2025-09-12 17:13:46.355 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2" Namespace="calico-system" Pod="whisker-6bc75cf845-pvlzj" 
WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--6bc75cf845--pvlzj-eth0" Sep 12 17:13:46.414649 containerd[2020]: time="2025-09-12T17:13:46.414343460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:46.414649 containerd[2020]: time="2025-09-12T17:13:46.414468740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:46.414649 containerd[2020]: time="2025-09-12T17:13:46.414497684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:46.422854 containerd[2020]: time="2025-09-12T17:13:46.414737780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:46.496735 systemd[1]: Started cri-containerd-9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2.scope - libcontainer container 9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2. 
Sep 12 17:13:46.710274 containerd[2020]: time="2025-09-12T17:13:46.709834461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc75cf845-pvlzj,Uid:bbf1192a-e737-4fc7-9849-9832a05706f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2\"" Sep 12 17:13:46.724331 containerd[2020]: time="2025-09-12T17:13:46.723667917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:13:47.033934 containerd[2020]: time="2025-09-12T17:13:47.033752203Z" level=info msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\"" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.256 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.256 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" iface="eth0" netns="/var/run/netns/cni-664bfce9-0075-9c9c-2aee-87dbceaf88e8" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.257 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" iface="eth0" netns="/var/run/netns/cni-664bfce9-0075-9c9c-2aee-87dbceaf88e8" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.259 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" iface="eth0" netns="/var/run/netns/cni-664bfce9-0075-9c9c-2aee-87dbceaf88e8" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.259 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.259 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.345 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.347 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.347 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.364 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.364 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.369 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:47.378873 containerd[2020]: 2025-09-12 17:13:47.373 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Sep 12 17:13:47.378873 containerd[2020]: time="2025-09-12T17:13:47.377836904Z" level=info msg="TearDown network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" successfully" Sep 12 17:13:47.378873 containerd[2020]: time="2025-09-12T17:13:47.377901656Z" level=info msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" returns successfully" Sep 12 17:13:47.387396 containerd[2020]: time="2025-09-12T17:13:47.382911812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cg78j,Uid:5a763ccd-0bce-40dd-95cb-421d108768a3,Namespace:calico-system,Attempt:1,}" Sep 12 17:13:47.394345 systemd[1]: run-netns-cni\x2d664bfce9\x2d0075\x2d9c9c\x2d2aee\x2d87dbceaf88e8.mount: Deactivated successfully. 
Sep 12 17:13:47.407016 systemd-networkd[1934]: caliafad6e6af0d: Gained IPv6LL Sep 12 17:13:47.788122 systemd-networkd[1934]: calid8c8efd4f4b: Link UP Sep 12 17:13:47.791364 systemd-networkd[1934]: calid8c8efd4f4b: Gained carrier Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.529 [INFO][4905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.589 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0 csi-node-driver- calico-system 5a763ccd-0bce-40dd-95cb-421d108768a3 984 0 2025-09-12 17:13:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-188 csi-node-driver-cg78j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid8c8efd4f4b [] [] }} ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.590 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.682 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" 
HandleID="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.682 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" HandleID="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-188", "pod":"csi-node-driver-cg78j", "timestamp":"2025-09-12 17:13:47.682176466 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.682 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.683 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.683 [INFO][4917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.719 [INFO][4917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.729 [INFO][4917] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.738 [INFO][4917] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.741 [INFO][4917] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.747 [INFO][4917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.747 [INFO][4917] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.750 [INFO][4917] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.761 [INFO][4917] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.775 [INFO][4917] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.2/26] block=192.168.103.0/26 
handle="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.775 [INFO][4917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.2/26] handle="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" host="ip-172-31-30-188" Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.775 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:47.830723 containerd[2020]: 2025-09-12 17:13:47.775 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.2/26] IPv6=[] ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" HandleID="k8s-pod-network.05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.781 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a763ccd-0bce-40dd-95cb-421d108768a3", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"csi-node-driver-cg78j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8c8efd4f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.781 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.2/32] ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.781 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8c8efd4f4b ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.791 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.794 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a763ccd-0bce-40dd-95cb-421d108768a3", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd", Pod:"csi-node-driver-cg78j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8c8efd4f4b", MAC:"72:9d:5f:50:c5:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:47.831983 containerd[2020]: 2025-09-12 17:13:47.824 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd" Namespace="calico-system" Pod="csi-node-driver-cg78j" WorkloadEndpoint="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0" Sep 12 17:13:47.875459 containerd[2020]: time="2025-09-12T17:13:47.875285399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:47.877329 containerd[2020]: time="2025-09-12T17:13:47.876968255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:47.877329 containerd[2020]: time="2025-09-12T17:13:47.877156715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:47.882138 containerd[2020]: time="2025-09-12T17:13:47.881532947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:47.965962 systemd[1]: Started cri-containerd-05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd.scope - libcontainer container 05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd. Sep 12 17:13:48.084983 systemd[1]: Started sshd@10-172.31.30.188:22-147.75.109.163:60366.service - OpenSSH per-connection server daemon (147.75.109.163:60366). 
Sep 12 17:13:48.161058 containerd[2020]: time="2025-09-12T17:13:48.160818704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cg78j,Uid:5a763ccd-0bce-40dd-95cb-421d108768a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd\"" Sep 12 17:13:48.309895 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 60366 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:13:48.316062 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:48.331988 systemd-logind[1992]: New session 11 of user core. Sep 12 17:13:48.339536 kernel: bpftool[5009]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:13:48.339673 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:13:48.656067 sshd[4972]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:48.665747 systemd[1]: sshd@10-172.31.30.188:22-147.75.109.163:60366.service: Deactivated successfully. Sep 12 17:13:48.666201 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:13:48.670792 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:13:48.677218 systemd-logind[1992]: Removed session 11. Sep 12 17:13:48.796489 systemd-networkd[1934]: vxlan.calico: Link UP Sep 12 17:13:48.796510 systemd-networkd[1934]: vxlan.calico: Gained carrier Sep 12 17:13:48.850256 (udev-worker)[4637]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:13:49.043300 containerd[2020]: time="2025-09-12T17:13:49.043230201Z" level=info msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" Sep 12 17:13:49.045664 containerd[2020]: time="2025-09-12T17:13:49.045528705Z" level=info msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" Sep 12 17:13:49.048668 containerd[2020]: time="2025-09-12T17:13:49.047737737Z" level=info msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\"" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.377 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.382 [INFO][5083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" iface="eth0" netns="/var/run/netns/cni-0c0e5cb5-30ca-245b-0980-ec44df6fe69c" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.384 [INFO][5083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" iface="eth0" netns="/var/run/netns/cni-0c0e5cb5-30ca-245b-0980-ec44df6fe69c" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.389 [INFO][5083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" iface="eth0" netns="/var/run/netns/cni-0c0e5cb5-30ca-245b-0980-ec44df6fe69c" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.389 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.390 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.609 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.610 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.611 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.654 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.654 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.658 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:49.674192 containerd[2020]: 2025-09-12 17:13:49.664 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Sep 12 17:13:49.680700 containerd[2020]: time="2025-09-12T17:13:49.678758016Z" level=info msg="TearDown network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" successfully" Sep 12 17:13:49.680700 containerd[2020]: time="2025-09-12T17:13:49.678810432Z" level=info msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" returns successfully" Sep 12 17:13:49.682659 containerd[2020]: time="2025-09-12T17:13:49.682491396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56c7c98bd9-hsxmb,Uid:20d4d8f5-a479-45e2-a048-aea30bb1c7f9,Namespace:calico-system,Attempt:1,}" Sep 12 17:13:49.685147 systemd[1]: run-netns-cni\x2d0c0e5cb5\x2d30ca\x2d245b\x2d0980\x2dec44df6fe69c.mount: Deactivated successfully. 
Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.481 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.484 [INFO][5094] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" iface="eth0" netns="/var/run/netns/cni-0a9324c7-eee3-f907-0b73-ed45316ccaf4" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.484 [INFO][5094] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" iface="eth0" netns="/var/run/netns/cni-0a9324c7-eee3-f907-0b73-ed45316ccaf4" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.485 [INFO][5094] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" iface="eth0" netns="/var/run/netns/cni-0a9324c7-eee3-f907-0b73-ed45316ccaf4" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.486 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.486 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.650 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.654 
[INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.658 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.694 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.694 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.698 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:49.724567 containerd[2020]: 2025-09-12 17:13:49.714 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:49.730724 containerd[2020]: time="2025-09-12T17:13:49.727150692Z" level=info msg="TearDown network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" successfully" Sep 12 17:13:49.730724 containerd[2020]: time="2025-09-12T17:13:49.727207992Z" level=info msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" returns successfully" Sep 12 17:13:49.734082 systemd[1]: run-netns-cni\x2d0a9324c7\x2deee3\x2df907\x2d0b73\x2ded45316ccaf4.mount: Deactivated successfully. 
Sep 12 17:13:49.734890 containerd[2020]: time="2025-09-12T17:13:49.734809788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-z4b6c,Uid:379e7da0-b94f-4766-ab46-ea563b6dd6ae,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.378 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.379 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" iface="eth0" netns="/var/run/netns/cni-693990d8-6155-835e-e547-ec14a29082c1" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.380 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" iface="eth0" netns="/var/run/netns/cni-693990d8-6155-835e-e547-ec14a29082c1" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.382 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" iface="eth0" netns="/var/run/netns/cni-693990d8-6155-835e-e547-ec14a29082c1" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.382 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.382 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.684 [INFO][5108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.684 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.699 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.729 [WARNING][5108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.729 [INFO][5108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.744 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:49.816524 containerd[2020]: 2025-09-12 17:13:49.780 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:49.818492 containerd[2020]: time="2025-09-12T17:13:49.818207881Z" level=info msg="TearDown network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" successfully" Sep 12 17:13:49.818492 containerd[2020]: time="2025-09-12T17:13:49.818249437Z" level=info msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" returns successfully" Sep 12 17:13:49.827786 containerd[2020]: time="2025-09-12T17:13:49.826774981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf69,Uid:cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17,Namespace:kube-system,Attempt:1,}" Sep 12 17:13:49.830244 systemd[1]: run-netns-cni\x2d693990d8\x2d6155\x2d835e\x2de547\x2dec14a29082c1.mount: Deactivated successfully. 
Sep 12 17:13:49.840442 systemd-networkd[1934]: calid8c8efd4f4b: Gained IPv6LL Sep 12 17:13:50.038940 containerd[2020]: time="2025-09-12T17:13:50.036487594Z" level=info msg="StopPodSandbox for \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\"" Sep 12 17:13:50.052660 containerd[2020]: time="2025-09-12T17:13:50.052586470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:50.068207 containerd[2020]: time="2025-09-12T17:13:50.066014194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:13:50.077361 containerd[2020]: time="2025-09-12T17:13:50.077282938Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:50.118504 containerd[2020]: time="2025-09-12T17:13:50.118060282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:50.157918 containerd[2020]: time="2025-09-12T17:13:50.157814242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 3.434060537s" Sep 12 17:13:50.157918 containerd[2020]: time="2025-09-12T17:13:50.157902802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:13:50.167714 containerd[2020]: time="2025-09-12T17:13:50.167659282Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:13:50.194631 containerd[2020]: time="2025-09-12T17:13:50.194058406Z" level=info msg="CreateContainer within sandbox \"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:13:50.270318 containerd[2020]: time="2025-09-12T17:13:50.269800247Z" level=info msg="CreateContainer within sandbox \"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8a67ce2f9805d2962caefb6b1de229315994f015572ed93c835e8a7a130937e4\"" Sep 12 17:13:50.271022 containerd[2020]: time="2025-09-12T17:13:50.270924935Z" level=info msg="StartContainer for \"8a67ce2f9805d2962caefb6b1de229315994f015572ed93c835e8a7a130937e4\"" Sep 12 17:13:50.415718 systemd-networkd[1934]: vxlan.calico: Gained IPv6LL Sep 12 17:13:50.467967 systemd[1]: Started cri-containerd-8a67ce2f9805d2962caefb6b1de229315994f015572ed93c835e8a7a130937e4.scope - libcontainer container 8a67ce2f9805d2962caefb6b1de229315994f015572ed93c835e8a7a130937e4. Sep 12 17:13:50.586308 systemd-networkd[1934]: cali966798fcb58: Link UP Sep 12 17:13:50.589226 systemd-networkd[1934]: cali966798fcb58: Gained carrier Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.342 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.347 [INFO][5201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" iface="eth0" netns="/var/run/netns/cni-23d95808-f3c3-e4c7-d428-f836ed4e414e" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.348 [INFO][5201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" iface="eth0" netns="/var/run/netns/cni-23d95808-f3c3-e4c7-d428-f836ed4e414e" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.350 [INFO][5201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" iface="eth0" netns="/var/run/netns/cni-23d95808-f3c3-e4c7-d428-f836ed4e414e" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.351 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.351 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.514 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" HandleID="k8s-pod-network.0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.514 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.574 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.603 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" HandleID="k8s-pod-network.0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.603 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" HandleID="k8s-pod-network.0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.607 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:50.633644 containerd[2020]: 2025-09-12 17:13:50.628 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf" Sep 12 17:13:50.639072 containerd[2020]: time="2025-09-12T17:13:50.637996561Z" level=info msg="TearDown network for sandbox \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\" successfully" Sep 12 17:13:50.639072 containerd[2020]: time="2025-09-12T17:13:50.638047753Z" level=info msg="StopPodSandbox for \"0fb421d04c72ceea00acd76bb4efd6289b41e2de60788f118f7b8a51afbfbdcf\" returns successfully" Sep 12 17:13:50.642308 containerd[2020]: time="2025-09-12T17:13:50.641875597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-vs5cz,Uid:bb29a4ce-08f1-4876-8ac8-6366015c6f10,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.044 [INFO][5141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0 calico-kube-controllers-56c7c98bd9- calico-system 20d4d8f5-a479-45e2-a048-aea30bb1c7f9 
1002 0 2025-09-12 17:13:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56c7c98bd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-188 calico-kube-controllers-56c7c98bd9-hsxmb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali966798fcb58 [] [] }} ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.046 [INFO][5141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.412 [INFO][5204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" HandleID="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.412 [INFO][5204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" HandleID="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103210), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-188", "pod":"calico-kube-controllers-56c7c98bd9-hsxmb", "timestamp":"2025-09-12 17:13:50.411984816 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.412 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.412 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.412 [INFO][5204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.474 [INFO][5204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.493 [INFO][5204] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.511 [INFO][5204] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.515 [INFO][5204] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.524 [INFO][5204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.525 [INFO][5204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 
handle="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.531 [INFO][5204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30 Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.554 [INFO][5204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.573 [INFO][5204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.3/26] block=192.168.103.0/26 handle="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.574 [INFO][5204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.3/26] handle="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" host="ip-172-31-30-188" Sep 12 17:13:50.649272 containerd[2020]: 2025-09-12 17:13:50.575 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:13:50.651121 containerd[2020]: 2025-09-12 17:13:50.575 [INFO][5204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.3/26] IPv6=[] ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" HandleID="k8s-pod-network.3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.651121 containerd[2020]: 2025-09-12 17:13:50.580 [INFO][5141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0", GenerateName:"calico-kube-controllers-56c7c98bd9-", Namespace:"calico-system", SelfLink:"", UID:"20d4d8f5-a479-45e2-a048-aea30bb1c7f9", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56c7c98bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"calico-kube-controllers-56c7c98bd9-hsxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966798fcb58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:50.651121 containerd[2020]: 2025-09-12 17:13:50.580 [INFO][5141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.3/32] ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.651121 containerd[2020]: 2025-09-12 17:13:50.580 [INFO][5141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali966798fcb58 ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.651121 containerd[2020]: 2025-09-12 17:13:50.591 [INFO][5141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.652895 containerd[2020]: 2025-09-12 17:13:50.593 [INFO][5141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0", GenerateName:"calico-kube-controllers-56c7c98bd9-", Namespace:"calico-system", SelfLink:"", UID:"20d4d8f5-a479-45e2-a048-aea30bb1c7f9", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56c7c98bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30", Pod:"calico-kube-controllers-56c7c98bd9-hsxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966798fcb58", MAC:"96:b3:1d:6c:f4:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:50.652895 containerd[2020]: 2025-09-12 17:13:50.634 [INFO][5141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30" Namespace="calico-system" Pod="calico-kube-controllers-56c7c98bd9-hsxmb" 
WorkloadEndpoint="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0" Sep 12 17:13:50.699523 systemd[1]: run-netns-cni\x2d23d95808\x2df3c3\x2de4c7\x2dd428\x2df836ed4e414e.mount: Deactivated successfully. Sep 12 17:13:50.781230 containerd[2020]: time="2025-09-12T17:13:50.781157497Z" level=info msg="StartContainer for \"8a67ce2f9805d2962caefb6b1de229315994f015572ed93c835e8a7a130937e4\" returns successfully" Sep 12 17:13:50.878921 containerd[2020]: time="2025-09-12T17:13:50.845218046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:50.878921 containerd[2020]: time="2025-09-12T17:13:50.845636390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:50.878921 containerd[2020]: time="2025-09-12T17:13:50.845705246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:50.878921 containerd[2020]: time="2025-09-12T17:13:50.846252614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:50.905809 systemd-networkd[1934]: calieb2e3f75923: Link UP Sep 12 17:13:50.906383 systemd-networkd[1934]: calieb2e3f75923: Gained carrier Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.302 [INFO][5159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0 calico-apiserver-55d9c6cf68- calico-apiserver 379e7da0-b94f-4766-ab46-ea563b6dd6ae 1004 0 2025-09-12 17:13:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d9c6cf68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-188 calico-apiserver-55d9c6cf68-z4b6c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb2e3f75923 [] [] }} ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.305 [INFO][5159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.539 [INFO][5230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" HandleID="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" 
Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.545 [INFO][5230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" HandleID="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-188", "pod":"calico-apiserver-55d9c6cf68-z4b6c", "timestamp":"2025-09-12 17:13:50.53929392 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.546 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.609 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.609 [INFO][5230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.662 [INFO][5230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.690 [INFO][5230] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.751 [INFO][5230] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.757 [INFO][5230] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.771 [INFO][5230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.771 [INFO][5230] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.777 [INFO][5230] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71 Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.805 [INFO][5230] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.829 [INFO][5230] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.4/26] block=192.168.103.0/26 
handle="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.830 [INFO][5230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.4/26] handle="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" host="ip-172-31-30-188" Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.830 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:50.971809 containerd[2020]: 2025-09-12 17:13:50.830 [INFO][5230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.4/26] IPv6=[] ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" HandleID="k8s-pod-network.195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.867 [INFO][5159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"379e7da0-b94f-4766-ab46-ea563b6dd6ae", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"calico-apiserver-55d9c6cf68-z4b6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2e3f75923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.868 [INFO][5159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.4/32] ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.868 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb2e3f75923 ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.906 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 
17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.911 [INFO][5159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"379e7da0-b94f-4766-ab46-ea563b6dd6ae", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71", Pod:"calico-apiserver-55d9c6cf68-z4b6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2e3f75923", MAC:"ba:de:d9:44:10:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
17:13:50.973052 containerd[2020]: 2025-09-12 17:13:50.947 [INFO][5159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-z4b6c" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:50.998132 systemd[1]: Started cri-containerd-3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30.scope - libcontainer container 3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30. Sep 12 17:13:51.036430 containerd[2020]: time="2025-09-12T17:13:51.036378935Z" level=info msg="StopPodSandbox for \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\"" Sep 12 17:13:51.154803 systemd-networkd[1934]: cali364c81f2984: Link UP Sep 12 17:13:51.169186 systemd-networkd[1934]: cali364c81f2984: Gained carrier Sep 12 17:13:51.211675 containerd[2020]: time="2025-09-12T17:13:51.210177767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:51.218530 containerd[2020]: time="2025-09-12T17:13:51.217885164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:51.218530 containerd[2020]: time="2025-09-12T17:13:51.217947192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:51.218530 containerd[2020]: time="2025-09-12T17:13:51.218129640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.384 [INFO][5172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0 coredns-674b8bbfcf- kube-system cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17 1003 0 2025-09-12 17:12:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-188 coredns-674b8bbfcf-jkf69 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali364c81f2984 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.385 [INFO][5172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.569 [INFO][5252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" HandleID="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.570 [INFO][5252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" 
HandleID="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c930), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-188", "pod":"coredns-674b8bbfcf-jkf69", "timestamp":"2025-09-12 17:13:50.569347392 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.570 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.831 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.831 [INFO][5252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.903 [INFO][5252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.938 [INFO][5252] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.978 [INFO][5252] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:50.995 [INFO][5252] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.004 [INFO][5252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 
17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.004 [INFO][5252] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.015 [INFO][5252] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4 Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.037 [INFO][5252] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.064 [INFO][5252] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.5/26] block=192.168.103.0/26 handle="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.065 [INFO][5252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.5/26] handle="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" host="ip-172-31-30-188" Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.065 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:13:51.231147 containerd[2020]: 2025-09-12 17:13:51.066 [INFO][5252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.5/26] IPv6=[] ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" HandleID="k8s-pod-network.42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.232487 containerd[2020]: 2025-09-12 17:13:51.112 [INFO][5172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"coredns-674b8bbfcf-jkf69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali364c81f2984", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:51.232487 containerd[2020]: 2025-09-12 17:13:51.116 [INFO][5172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.5/32] ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.232487 containerd[2020]: 2025-09-12 17:13:51.118 [INFO][5172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali364c81f2984 ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.232487 containerd[2020]: 2025-09-12 17:13:51.176 [INFO][5172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.233058 containerd[2020]: 2025-09-12 17:13:51.178 [INFO][5172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4", Pod:"coredns-674b8bbfcf-jkf69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali364c81f2984", MAC:"e2:7b:2e:32:e3:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:51.233058 containerd[2020]: 2025-09-12 17:13:51.215 [INFO][5172] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf69" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:51.315160 systemd[1]: Started cri-containerd-195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71.scope - libcontainer container 195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71. Sep 12 17:13:51.413084 containerd[2020]: time="2025-09-12T17:13:51.413010300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56c7c98bd9-hsxmb,Uid:20d4d8f5-a479-45e2-a048-aea30bb1c7f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30\"" Sep 12 17:13:51.423534 containerd[2020]: time="2025-09-12T17:13:51.422967577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:51.423534 containerd[2020]: time="2025-09-12T17:13:51.423074377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:51.423534 containerd[2020]: time="2025-09-12T17:13:51.423101473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:51.429078 containerd[2020]: time="2025-09-12T17:13:51.427729189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:51.478495 systemd[1]: Started cri-containerd-42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4.scope - libcontainer container 42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4. 
Sep 12 17:13:51.634683 containerd[2020]: time="2025-09-12T17:13:51.634065662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-z4b6c,Uid:379e7da0-b94f-4766-ab46-ea563b6dd6ae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71\"" Sep 12 17:13:51.652839 containerd[2020]: time="2025-09-12T17:13:51.652560842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf69,Uid:cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17,Namespace:kube-system,Attempt:1,} returns sandbox id \"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4\"" Sep 12 17:13:51.667798 containerd[2020]: time="2025-09-12T17:13:51.667723286Z" level=info msg="CreateContainer within sandbox \"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:13:51.729614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135777376.mount: Deactivated successfully. Sep 12 17:13:51.765503 systemd-networkd[1934]: calie83c58df7fe: Link UP Sep 12 17:13:51.771917 systemd-networkd[1934]: calie83c58df7fe: Gained carrier Sep 12 17:13:51.777973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535980492.mount: Deactivated successfully. 
Sep 12 17:13:51.800673 containerd[2020]: time="2025-09-12T17:13:51.800362478Z" level=info msg="CreateContainer within sandbox \"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b93d3a6f112ade3678aeadb3ad814d2913bdea39584f1bc803ada1bd6b326933\"" Sep 12 17:13:51.804130 containerd[2020]: time="2025-09-12T17:13:51.803941166Z" level=info msg="StartContainer for \"b93d3a6f112ade3678aeadb3ad814d2913bdea39584f1bc803ada1bd6b326933\"" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.483 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.487 [INFO][5361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" iface="eth0" netns="/var/run/netns/cni-cca676a1-662e-c3ff-c55f-6b3d2b87903d" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.488 [INFO][5361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" iface="eth0" netns="/var/run/netns/cni-cca676a1-662e-c3ff-c55f-6b3d2b87903d" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.490 [INFO][5361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" iface="eth0" netns="/var/run/netns/cni-cca676a1-662e-c3ff-c55f-6b3d2b87903d" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.492 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.492 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.692 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" HandleID="k8s-pod-network.8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.692 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.705 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.749 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" HandleID="k8s-pod-network.8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.749 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" HandleID="k8s-pod-network.8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.759 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:51.809631 containerd[2020]: 2025-09-12 17:13:51.793 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f" Sep 12 17:13:51.811920 containerd[2020]: time="2025-09-12T17:13:51.810069398Z" level=info msg="TearDown network for sandbox \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\" successfully" Sep 12 17:13:51.811920 containerd[2020]: time="2025-09-12T17:13:51.810126482Z" level=info msg="StopPodSandbox for \"8e64a0fe09a860e07350ede06400277e06b68c453fbb507de38d0c7de5b7067f\" returns successfully" Sep 12 17:13:51.820154 containerd[2020]: time="2025-09-12T17:13:51.816264710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-28q4g,Uid:fbef718a-5d24-4aa3-8e46-22c641d6b1fe,Namespace:calico-system,Attempt:1,}" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.097 [INFO][5294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0 calico-apiserver-55d9c6cf68- calico-apiserver bb29a4ce-08f1-4876-8ac8-6366015c6f10 1013 0 2025-09-12 17:13:11 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d9c6cf68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-188 calico-apiserver-55d9c6cf68-vs5cz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie83c58df7fe [] [] }} ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.098 [INFO][5294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.490 [INFO][5373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" HandleID="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.492 [INFO][5373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" HandleID="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-188", 
"pod":"calico-apiserver-55d9c6cf68-vs5cz", "timestamp":"2025-09-12 17:13:51.489385597 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.494 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.494 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.495 [INFO][5373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.556 [INFO][5373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.580 [INFO][5373] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.605 [INFO][5373] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.620 [INFO][5373] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.634 [INFO][5373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.636 [INFO][5373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" host="ip-172-31-30-188" Sep 12 17:13:51.846826 
containerd[2020]: 2025-09-12 17:13:51.641 [INFO][5373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73 Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.670 [INFO][5373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.699 [INFO][5373] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.6/26] block=192.168.103.0/26 handle="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.702 [INFO][5373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.6/26] handle="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" host="ip-172-31-30-188" Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.704 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:13:51.846826 containerd[2020]: 2025-09-12 17:13:51.705 [INFO][5373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.6/26] IPv6=[] ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" HandleID="k8s-pod-network.b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.726 [INFO][5294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb29a4ce-08f1-4876-8ac8-6366015c6f10", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"calico-apiserver-55d9c6cf68-vs5cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie83c58df7fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.732 [INFO][5294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.6/32] ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.732 [INFO][5294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie83c58df7fe ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.779 [INFO][5294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.781 [INFO][5294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb29a4ce-08f1-4876-8ac8-6366015c6f10", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73", Pod:"calico-apiserver-55d9c6cf68-vs5cz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie83c58df7fe", MAC:"7e:03:d2:9d:d2:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:51.849388 containerd[2020]: 2025-09-12 17:13:51.822 [INFO][5294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73" Namespace="calico-apiserver" Pod="calico-apiserver-55d9c6cf68-vs5cz" WorkloadEndpoint="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--vs5cz-eth0" Sep 12 17:13:51.936447 systemd[1]: Started cri-containerd-b93d3a6f112ade3678aeadb3ad814d2913bdea39584f1bc803ada1bd6b326933.scope - libcontainer 
container b93d3a6f112ade3678aeadb3ad814d2913bdea39584f1bc803ada1bd6b326933. Sep 12 17:13:52.026283 containerd[2020]: time="2025-09-12T17:13:52.026101560Z" level=info msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" Sep 12 17:13:52.059056 containerd[2020]: time="2025-09-12T17:13:52.058852176Z" level=info msg="StopPodSandbox for \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\"" Sep 12 17:13:52.071672 containerd[2020]: time="2025-09-12T17:13:52.068203176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:52.071672 containerd[2020]: time="2025-09-12T17:13:52.068294340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:52.071672 containerd[2020]: time="2025-09-12T17:13:52.068321388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:52.071672 containerd[2020]: time="2025-09-12T17:13:52.068479800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:52.108130 containerd[2020]: time="2025-09-12T17:13:52.107787300Z" level=info msg="StartContainer for \"b93d3a6f112ade3678aeadb3ad814d2913bdea39584f1bc803ada1bd6b326933\" returns successfully" Sep 12 17:13:52.243503 systemd[1]: Started cri-containerd-b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73.scope - libcontainer container b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73. 
Sep 12 17:13:52.541659 containerd[2020]: time="2025-09-12T17:13:52.540850766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d9c6cf68-vs5cz,Uid:bb29a4ce-08f1-4876-8ac8-6366015c6f10,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73\"" Sep 12 17:13:52.657420 systemd-networkd[1934]: cali966798fcb58: Gained IPv6LL Sep 12 17:13:52.701730 systemd[1]: run-netns-cni\x2dcca676a1\x2d662e\x2dc3ff\x2dc55f\x2d6b3d2b87903d.mount: Deactivated successfully. Sep 12 17:13:52.858799 systemd-networkd[1934]: calieb2e3f75923: Gained IPv6LL Sep 12 17:13:52.880021 systemd-networkd[1934]: cali1738b3d4f5c: Link UP Sep 12 17:13:52.891857 systemd-networkd[1934]: cali1738b3d4f5c: Gained carrier Sep 12 17:13:52.965223 kubelet[3436]: I0912 17:13:52.964920 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkf69" podStartSLOduration=54.96488842 podStartE2EDuration="54.96488842s" podCreationTimestamp="2025-09-12 17:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:13:52.87035818 +0000 UTC m=+61.083441593" watchObservedRunningTime="2025-09-12 17:13:52.96488842 +0000 UTC m=+61.177971845" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.408 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4", Pod:"coredns-674b8bbfcf-jkf69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali364c81f2984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 
17:13:52.418 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.418 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" iface="eth0" netns="" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.418 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.418 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.808 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.812 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.826 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.913 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.913 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.934 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:52.969105 containerd[2020]: 2025-09-12 17:13:52.953 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:52.969105 containerd[2020]: time="2025-09-12T17:13:52.968539036Z" level=info msg="TearDown network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" successfully" Sep 12 17:13:52.969105 containerd[2020]: time="2025-09-12T17:13:52.968579848Z" level=info msg="StopPodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" returns successfully" Sep 12 17:13:52.977454 containerd[2020]: time="2025-09-12T17:13:52.977020552Z" level=info msg="RemovePodSandbox for \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" Sep 12 17:13:52.977454 containerd[2020]: time="2025-09-12T17:13:52.977126188Z" level=info msg="Forcibly stopping sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\"" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.189 [INFO][5502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0 
goldmane-54d579b49d- calico-system fbef718a-5d24-4aa3-8e46-22c641d6b1fe 1030 0 2025-09-12 17:13:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-188 goldmane-54d579b49d-28q4g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1738b3d4f5c [] [] }} ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.190 [INFO][5502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.596 [INFO][5593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" HandleID="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.604 [INFO][5593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" HandleID="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-188", "pod":"goldmane-54d579b49d-28q4g", "timestamp":"2025-09-12 
17:13:52.596035466 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.606 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.607 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.608 [INFO][5593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188' Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.667 [INFO][5593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.716 [INFO][5593] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.731 [INFO][5593] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.736 [INFO][5593] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.747 [INFO][5593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.747 [INFO][5593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.755 [INFO][5593] ipam/ipam.go 1764: Creating 
new handle: k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7 Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.785 [INFO][5593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.821 [INFO][5593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.7/26] block=192.168.103.0/26 handle="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.821 [INFO][5593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.7/26] handle="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" host="ip-172-31-30-188" Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.821 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:13:52.981688 containerd[2020]: 2025-09-12 17:13:52.821 [INFO][5593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.7/26] IPv6=[] ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" HandleID="k8s-pod-network.42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Workload="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.841 [INFO][5502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"fbef718a-5d24-4aa3-8e46-22c641d6b1fe", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"goldmane-54d579b49d-28q4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali1738b3d4f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.841 [INFO][5502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.7/32] ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.841 [INFO][5502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1738b3d4f5c ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.930 [INFO][5502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.931 [INFO][5502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"fbef718a-5d24-4aa3-8e46-22c641d6b1fe", 
ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7", Pod:"goldmane-54d579b49d-28q4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1738b3d4f5c", MAC:"6a:45:ea:95:b0:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:52.986994 containerd[2020]: 2025-09-12 17:13:52.973 [INFO][5502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7" Namespace="calico-system" Pod="goldmane-54d579b49d-28q4g" WorkloadEndpoint="ip--172--31--30--188-k8s-goldmane--54d579b49d--28q4g-eth0" Sep 12 17:13:53.102924 systemd-networkd[1934]: cali364c81f2984: Gained IPv6LL Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.632 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.633 [INFO][5580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" iface="eth0" netns="/var/run/netns/cni-5b2f4649-c14e-3fad-fced-a133aa786f16" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.635 [INFO][5580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" iface="eth0" netns="/var/run/netns/cni-5b2f4649-c14e-3fad-fced-a133aa786f16" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.638 [INFO][5580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" iface="eth0" netns="/var/run/netns/cni-5b2f4649-c14e-3fad-fced-a133aa786f16" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.638 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:52.638 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.019 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" HandleID="k8s-pod-network.752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.021 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.023 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.066 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" HandleID="k8s-pod-network.752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.066 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" HandleID="k8s-pod-network.752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.076 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:53.119430 containerd[2020]: 2025-09-12 17:13:53.093 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468" Sep 12 17:13:53.125060 containerd[2020]: time="2025-09-12T17:13:53.120533473Z" level=info msg="TearDown network for sandbox \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\" successfully" Sep 12 17:13:53.125060 containerd[2020]: time="2025-09-12T17:13:53.120627829Z" level=info msg="StopPodSandbox for \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\" returns successfully" Sep 12 17:13:53.127085 containerd[2020]: time="2025-09-12T17:13:53.126943033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bj892,Uid:19645dd7-24fd-4986-a5a4-93fb6c895793,Namespace:kube-system,Attempt:1,}" Sep 12 17:13:53.129920 systemd[1]: run-netns-cni\x2d5b2f4649\x2dc14e\x2d3fad\x2dfced\x2da133aa786f16.mount: Deactivated successfully. Sep 12 17:13:53.257767 containerd[2020]: time="2025-09-12T17:13:53.255498806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:13:53.257767 containerd[2020]: time="2025-09-12T17:13:53.255652154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:13:53.257767 containerd[2020]: time="2025-09-12T17:13:53.255695726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:53.257767 containerd[2020]: time="2025-09-12T17:13:53.255888422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:13:53.369228 containerd[2020]: time="2025-09-12T17:13:53.368630354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.373990 systemd[1]: Started cri-containerd-42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7.scope - libcontainer container 42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7. 
Sep 12 17:13:53.379653 containerd[2020]: time="2025-09-12T17:13:53.378830570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:13:53.385845 containerd[2020]: time="2025-09-12T17:13:53.384951518Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.414745 containerd[2020]: time="2025-09-12T17:13:53.413648066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:53.422998 containerd[2020]: time="2025-09-12T17:13:53.422695646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 3.253993048s" Sep 12 17:13:53.422998 containerd[2020]: time="2025-09-12T17:13:53.422984042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:13:53.436527 containerd[2020]: time="2025-09-12T17:13:53.436279791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:13:53.459179 containerd[2020]: time="2025-09-12T17:13:53.459111795Z" level=info msg="CreateContainer within sandbox \"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:13:53.487008 systemd-networkd[1934]: calie83c58df7fe: Gained IPv6LL Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.350 [WARNING][5656] cni-plugin/k8s.go 
604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd1a85c3-f4b2-49ef-9fc6-6bb14d28eb17", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"42ccd21950c625931ff8ec0a654cb06a056449ba68b4c0f140eabb97e6f326f4", Pod:"coredns-674b8bbfcf-jkf69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali364c81f2984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.352 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.352 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" iface="eth0" netns="" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.352 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.352 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.488 [INFO][5706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.489 [INFO][5706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.489 [INFO][5706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.527 [WARNING][5706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.528 [INFO][5706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" HandleID="k8s-pod-network.89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--jkf69-eth0" Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.532 [INFO][5706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:53.558546 containerd[2020]: 2025-09-12 17:13:53.549 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317" Sep 12 17:13:53.561453 containerd[2020]: time="2025-09-12T17:13:53.560408211Z" level=info msg="TearDown network for sandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" successfully" Sep 12 17:13:53.570096 containerd[2020]: time="2025-09-12T17:13:53.570015135Z" level=info msg="CreateContainer within sandbox \"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1ca7d2de60d018dddd872e45303a6cdf52d028111e418df60054f5111a58acaf\"" Sep 12 17:13:53.573723 containerd[2020]: time="2025-09-12T17:13:53.573655455Z" level=info msg="StartContainer for \"1ca7d2de60d018dddd872e45303a6cdf52d028111e418df60054f5111a58acaf\"" Sep 12 17:13:53.574325 containerd[2020]: time="2025-09-12T17:13:53.574273275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus."
Sep 12 17:13:53.574556 containerd[2020]: time="2025-09-12T17:13:53.574524219Z" level=info msg="RemovePodSandbox \"89e2b1bc53d635a4c448f10c5eb0aa6e2dc969e0d968083b6d8c99297aa0c317\" returns successfully"
Sep 12 17:13:53.576518 containerd[2020]: time="2025-09-12T17:13:53.576465327Z" level=info msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\""
Sep 12 17:13:53.598672 containerd[2020]: time="2025-09-12T17:13:53.598563603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-28q4g,Uid:fbef718a-5d24-4aa3-8e46-22c641d6b1fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7\""
Sep 12 17:13:53.730325 systemd[1]: Started sshd@11-172.31.30.188:22-147.75.109.163:44894.service - OpenSSH per-connection server daemon (147.75.109.163:44894).
Sep 12 17:13:53.751998 systemd[1]: Started cri-containerd-1ca7d2de60d018dddd872e45303a6cdf52d028111e418df60054f5111a58acaf.scope - libcontainer container 1ca7d2de60d018dddd872e45303a6cdf52d028111e418df60054f5111a58acaf.
Sep 12 17:13:53.842843 systemd-networkd[1934]: calied69fc64fcc: Link UP
Sep 12 17:13:53.843267 systemd-networkd[1934]: calied69fc64fcc: Gained carrier
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.488 [INFO][5677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0 coredns-674b8bbfcf- kube-system 19645dd7-24fd-4986-a5a4-93fb6c895793 1045 0 2025-09-12 17:12:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-188 coredns-674b8bbfcf-bj892 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied69fc64fcc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.495 [INFO][5677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.608 [INFO][5722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" HandleID="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.608 [INFO][5722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" HandleID="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c330), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-188", "pod":"coredns-674b8bbfcf-bj892", "timestamp":"2025-09-12 17:13:53.607951119 +0000 UTC"}, Hostname:"ip-172-31-30-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.608 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.608 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.609 [INFO][5722] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-188'
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.634 [INFO][5722] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.649 [INFO][5722] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.659 [INFO][5722] ipam/ipam.go 511: Trying affinity for 192.168.103.0/26 host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.683 [INFO][5722] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.0/26 host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.700 [INFO][5722] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.700 [INFO][5722] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.708 [INFO][5722] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.740 [INFO][5722] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.777 [INFO][5722] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.8/26] block=192.168.103.0/26 handle="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.777 [INFO][5722] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.8/26] handle="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" host="ip-172-31-30-188"
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.777 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:13:53.942092 containerd[2020]: 2025-09-12 17:13:53.779 [INFO][5722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.8/26] IPv6=[] ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" HandleID="k8s-pod-network.9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Workload="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.945514 containerd[2020]: 2025-09-12 17:13:53.805 [INFO][5677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"19645dd7-24fd-4986-a5a4-93fb6c895793", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"", Pod:"coredns-674b8bbfcf-bj892", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied69fc64fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:53.945514 containerd[2020]: 2025-09-12 17:13:53.806 [INFO][5677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.8/32] ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.945514 containerd[2020]: 2025-09-12 17:13:53.806 [INFO][5677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied69fc64fcc ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.945514 containerd[2020]: 2025-09-12 17:13:53.855 [INFO][5677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.946457 containerd[2020]: 2025-09-12 17:13:53.861 [INFO][5677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"19645dd7-24fd-4986-a5a4-93fb6c895793", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 12, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc", Pod:"coredns-674b8bbfcf-bj892", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied69fc64fcc", MAC:"22:16:f0:c7:cb:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:53.946457 containerd[2020]: 2025-09-12 17:13:53.916 [INFO][5677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc" Namespace="kube-system" Pod="coredns-674b8bbfcf-bj892" WorkloadEndpoint="ip--172--31--30--188-k8s-coredns--674b8bbfcf--bj892-eth0"
Sep 12 17:13:53.989771 sshd[5767]: Accepted publickey for core from 147.75.109.163 port 44894 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:13:53.995384 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:13:54.018796 systemd-logind[1992]: New session 12 of user core.
Sep 12 17:13:54.026963 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:13:54.042384 containerd[2020]: time="2025-09-12T17:13:54.039999494Z" level=info msg="StartContainer for \"1ca7d2de60d018dddd872e45303a6cdf52d028111e418df60054f5111a58acaf\" returns successfully"
Sep 12 17:13:54.103381 containerd[2020]: time="2025-09-12T17:13:54.103139066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:13:54.105007 containerd[2020]: time="2025-09-12T17:13:54.104303582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:13:54.108358 containerd[2020]: time="2025-09-12T17:13:54.105263342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:54.108622 containerd[2020]: time="2025-09-12T17:13:54.107852630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:54.257988 systemd[1]: Started cri-containerd-9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc.scope - libcontainer container 9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc.
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:53.908 [WARNING][5749] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0", GenerateName:"calico-kube-controllers-56c7c98bd9-", Namespace:"calico-system", SelfLink:"", UID:"20d4d8f5-a479-45e2-a048-aea30bb1c7f9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56c7c98bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30", Pod:"calico-kube-controllers-56c7c98bd9-hsxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966798fcb58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:53.914 [INFO][5749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:53.915 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" iface="eth0" netns=""
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:53.915 [INFO][5749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:53.915 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.192 [INFO][5787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.193 [INFO][5787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.194 [INFO][5787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.254 [WARNING][5787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.254 [INFO][5787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.268 [INFO][5787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:13:54.289419 containerd[2020]: 2025-09-12 17:13:54.278 [INFO][5749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.289419 containerd[2020]: time="2025-09-12T17:13:54.289195155Z" level=info msg="TearDown network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" successfully"
Sep 12 17:13:54.289419 containerd[2020]: time="2025-09-12T17:13:54.289234419Z" level=info msg="StopPodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" returns successfully"
Sep 12 17:13:54.294161 containerd[2020]: time="2025-09-12T17:13:54.290937519Z" level=info msg="RemovePodSandbox for \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\""
Sep 12 17:13:54.294161 containerd[2020]: time="2025-09-12T17:13:54.291396807Z" level=info msg="Forcibly stopping sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\""
Sep 12 17:13:54.440314 containerd[2020]: time="2025-09-12T17:13:54.440161228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bj892,Uid:19645dd7-24fd-4986-a5a4-93fb6c895793,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc\""
Sep 12 17:13:54.460997 containerd[2020]: time="2025-09-12T17:13:54.460927576Z" level=info msg="CreateContainer within sandbox \"9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:13:54.522405 sshd[5767]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:54.527375 containerd[2020]: time="2025-09-12T17:13:54.524460088Z" level=info msg="CreateContainer within sandbox \"9f105e561321999e1a50cd3d14f822b918b5be9a9e1c19817b6f03e027c6eafc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcaa2422c2d9a4584868dee30e1b88b7abc7fba6d899b8d4880bd4ed0507f507\""
Sep 12 17:13:54.531566 containerd[2020]: time="2025-09-12T17:13:54.529959988Z" level=info msg="StartContainer for \"bcaa2422c2d9a4584868dee30e1b88b7abc7fba6d899b8d4880bd4ed0507f507\""
Sep 12 17:13:54.542688 systemd[1]: sshd@11-172.31.30.188:22-147.75.109.163:44894.service: Deactivated successfully.
Sep 12 17:13:54.552440 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:13:54.561150 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:13:54.590203 systemd[1]: Started sshd@12-172.31.30.188:22-147.75.109.163:44908.service - OpenSSH per-connection server daemon (147.75.109.163:44908).
Sep 12 17:13:54.592714 systemd-logind[1992]: Removed session 12.
Sep 12 17:13:54.647689 systemd[1]: Started cri-containerd-bcaa2422c2d9a4584868dee30e1b88b7abc7fba6d899b8d4880bd4ed0507f507.scope - libcontainer container bcaa2422c2d9a4584868dee30e1b88b7abc7fba6d899b8d4880bd4ed0507f507.
Sep 12 17:13:54.704845 systemd-networkd[1934]: cali1738b3d4f5c: Gained IPv6LL
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.557 [WARNING][5856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0", GenerateName:"calico-kube-controllers-56c7c98bd9-", Namespace:"calico-system", SelfLink:"", UID:"20d4d8f5-a479-45e2-a048-aea30bb1c7f9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56c7c98bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30", Pod:"calico-kube-controllers-56c7c98bd9-hsxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966798fcb58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.557 [INFO][5856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.557 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" iface="eth0" netns=""
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.558 [INFO][5856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.558 [INFO][5856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.667 [INFO][5879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.667 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.668 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.707 [WARNING][5879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.708 [INFO][5879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" HandleID="k8s-pod-network.f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8" Workload="ip--172--31--30--188-k8s-calico--kube--controllers--56c7c98bd9--hsxmb-eth0"
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.720 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:13:54.740174 containerd[2020]: 2025-09-12 17:13:54.734 [INFO][5856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8"
Sep 12 17:13:54.741638 containerd[2020]: time="2025-09-12T17:13:54.740361545Z" level=info msg="TearDown network for sandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" successfully"
Sep 12 17:13:54.757355 containerd[2020]: time="2025-09-12T17:13:54.756309569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:13:54.759339 containerd[2020]: time="2025-09-12T17:13:54.757215473Z" level=info msg="RemovePodSandbox \"f6c15c223ca9cc0bc38d957a3c2402e4f5bcedbc82580a7cee40a52d44f475f8\" returns successfully"
Sep 12 17:13:54.759339 containerd[2020]: time="2025-09-12T17:13:54.758557649Z" level=info msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\""
Sep 12 17:13:54.798551 containerd[2020]: time="2025-09-12T17:13:54.798372449Z" level=info msg="StartContainer for \"bcaa2422c2d9a4584868dee30e1b88b7abc7fba6d899b8d4880bd4ed0507f507\" returns successfully"
Sep 12 17:13:54.840473 sshd[5885]: Accepted publickey for core from 147.75.109.163 port 44908 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:13:54.847200 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:13:54.870000 systemd-logind[1992]: New session 13 of user core.
Sep 12 17:13:54.878939 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.887 [WARNING][5922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a763ccd-0bce-40dd-95cb-421d108768a3", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd", Pod:"csi-node-driver-cg78j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8c8efd4f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.891 [INFO][5922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.891 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" iface="eth0" netns=""
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.891 [INFO][5922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.891 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.961 [INFO][5936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.962 [INFO][5936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:54.962 [INFO][5936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:55.004 [WARNING][5936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:55.005 [INFO][5936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:55.015 [INFO][5936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:13:55.038530 containerd[2020]: 2025-09-12 17:13:55.028 [INFO][5922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.038530 containerd[2020]: time="2025-09-12T17:13:55.038337542Z" level=info msg="TearDown network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" successfully"
Sep 12 17:13:55.038530 containerd[2020]: time="2025-09-12T17:13:55.038381210Z" level=info msg="StopPodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" returns successfully"
Sep 12 17:13:55.043261 containerd[2020]: time="2025-09-12T17:13:55.041233430Z" level=info msg="RemovePodSandbox for \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\""
Sep 12 17:13:55.043261 containerd[2020]: time="2025-09-12T17:13:55.041295710Z" level=info msg="Forcibly stopping sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\""
Sep 12 17:13:55.143947 kubelet[3436]: I0912 17:13:55.141913 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bj892" podStartSLOduration=57.141888843 podStartE2EDuration="57.141888843s" podCreationTimestamp="2025-09-12 17:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:13:55.057896223 +0000 UTC m=+63.270979660" watchObservedRunningTime="2025-09-12 17:13:55.141888843 +0000 UTC m=+63.354972256"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.317 [WARNING][5958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a763ccd-0bce-40dd-95cb-421d108768a3", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd", Pod:"csi-node-driver-cg78j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8c8efd4f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.317 [INFO][5958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.317 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" iface="eth0" netns=""
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.317 [INFO][5958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.318 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.444 [INFO][5969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.444 [INFO][5969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.446 [INFO][5969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.482 [WARNING][5969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.482 [INFO][5969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" HandleID="k8s-pod-network.6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6" Workload="ip--172--31--30--188-k8s-csi--node--driver--cg78j-eth0"
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.485 [INFO][5969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:13:55.500639 containerd[2020]: 2025-09-12 17:13:55.492 [INFO][5958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6"
Sep 12 17:13:55.500639 containerd[2020]: time="2025-09-12T17:13:55.500299997Z" level=info msg="TearDown network for sandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" successfully"
Sep 12 17:13:55.517888 containerd[2020]: time="2025-09-12T17:13:55.517765841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:13:55.518064 containerd[2020]: time="2025-09-12T17:13:55.517897469Z" level=info msg="RemovePodSandbox \"6a1b0042a18f028f5e73101e2297fd896f3a5c5e98d829b8a5b3f8ce0679c3e6\" returns successfully" Sep 12 17:13:55.520618 containerd[2020]: time="2025-09-12T17:13:55.520011725Z" level=info msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" Sep 12 17:13:55.542153 sshd[5885]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:55.555073 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:13:55.557868 systemd[1]: sshd@12-172.31.30.188:22-147.75.109.163:44908.service: Deactivated successfully. Sep 12 17:13:55.603735 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:13:55.614289 systemd[1]: Started sshd@13-172.31.30.188:22-147.75.109.163:44914.service - OpenSSH per-connection server daemon (147.75.109.163:44914). Sep 12 17:13:55.629338 systemd-logind[1992]: Removed session 13. Sep 12 17:13:55.854950 systemd-networkd[1934]: calied69fc64fcc: Gained IPv6LL Sep 12 17:13:55.875986 sshd[5999]: Accepted publickey for core from 147.75.109.163 port 44914 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:13:55.884085 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:13:55.901402 systemd-logind[1992]: New session 14 of user core. Sep 12 17:13:55.910959 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.752 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"379e7da0-b94f-4766-ab46-ea563b6dd6ae", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71", Pod:"calico-apiserver-55d9c6cf68-z4b6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2e3f75923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.752 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.752 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" iface="eth0" netns="" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.752 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.752 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.881 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.882 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.882 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.928 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.929 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.935 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:55.946403 containerd[2020]: 2025-09-12 17:13:55.940 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:55.949229 containerd[2020]: time="2025-09-12T17:13:55.948414235Z" level=info msg="TearDown network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" successfully" Sep 12 17:13:55.949229 containerd[2020]: time="2025-09-12T17:13:55.948465283Z" level=info msg="StopPodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" returns successfully" Sep 12 17:13:55.949229 containerd[2020]: time="2025-09-12T17:13:55.949179559Z" level=info msg="RemovePodSandbox for \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" Sep 12 17:13:55.949229 containerd[2020]: time="2025-09-12T17:13:55.949229707Z" level=info msg="Forcibly stopping sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\"" Sep 12 17:13:56.369370 sshd[5999]: pam_unix(sshd:session): session closed for user core Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.094 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0", GenerateName:"calico-apiserver-55d9c6cf68-", Namespace:"calico-apiserver", SelfLink:"", UID:"379e7da0-b94f-4766-ab46-ea563b6dd6ae", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d9c6cf68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-188", ContainerID:"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71", Pod:"calico-apiserver-55d9c6cf68-z4b6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb2e3f75923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.095 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.095 [INFO][6026] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" iface="eth0" netns="" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.095 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.095 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.289 [INFO][6039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.290 [INFO][6039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.290 [INFO][6039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.350 [WARNING][6039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.350 [INFO][6039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" HandleID="k8s-pod-network.29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Workload="ip--172--31--30--188-k8s-calico--apiserver--55d9c6cf68--z4b6c-eth0" Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.359 [INFO][6039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:56.379541 containerd[2020]: 2025-09-12 17:13:56.364 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7" Sep 12 17:13:56.379541 containerd[2020]: time="2025-09-12T17:13:56.377255393Z" level=info msg="TearDown network for sandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" successfully" Sep 12 17:13:56.386547 systemd[1]: sshd@13-172.31.30.188:22-147.75.109.163:44914.service: Deactivated successfully. Sep 12 17:13:56.399340 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:13:56.404238 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:13:56.412076 containerd[2020]: time="2025-09-12T17:13:56.409546409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:13:56.412076 containerd[2020]: time="2025-09-12T17:13:56.410410205Z" level=info msg="RemovePodSandbox \"29e757f11c572ffbe839470959d2c549403c9651a8da49ef0aa8532058d3f6a7\" returns successfully" Sep 12 17:13:56.413379 systemd-logind[1992]: Removed session 14. Sep 12 17:13:56.415814 containerd[2020]: time="2025-09-12T17:13:56.413745857Z" level=info msg="StopPodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.617 [WARNING][6055] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.618 [INFO][6055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.618 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" iface="eth0" netns="" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.618 [INFO][6055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.618 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.732 [INFO][6062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.732 [INFO][6062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.733 [INFO][6062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.760 [WARNING][6062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.760 [INFO][6062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.764 [INFO][6062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:56.774679 containerd[2020]: 2025-09-12 17:13:56.768 [INFO][6055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:56.774679 containerd[2020]: time="2025-09-12T17:13:56.774307867Z" level=info msg="TearDown network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" successfully" Sep 12 17:13:56.774679 containerd[2020]: time="2025-09-12T17:13:56.774369247Z" level=info msg="StopPodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" returns successfully" Sep 12 17:13:56.776499 containerd[2020]: time="2025-09-12T17:13:56.776435119Z" level=info msg="RemovePodSandbox for \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" Sep 12 17:13:56.776680 containerd[2020]: time="2025-09-12T17:13:56.776505751Z" level=info msg="Forcibly stopping sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\"" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:56.966 [WARNING][6076] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" WorkloadEndpoint="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:56.968 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:56.968 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" iface="eth0" netns="" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:56.968 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:56.968 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.077 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.080 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.080 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.140 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.140 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" HandleID="k8s-pod-network.ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Workload="ip--172--31--30--188-k8s-whisker--855747c87d--4t672-eth0" Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.152 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:13:57.172766 containerd[2020]: 2025-09-12 17:13:57.164 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c" Sep 12 17:13:57.172766 containerd[2020]: time="2025-09-12T17:13:57.172152449Z" level=info msg="TearDown network for sandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" successfully" Sep 12 17:13:57.179018 containerd[2020]: time="2025-09-12T17:13:57.178908833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:13:57.179501 containerd[2020]: time="2025-09-12T17:13:57.179369501Z" level=info msg="RemovePodSandbox \"ce2eeb000537f8529590efa91c5904257cdb9f81079ad51be6127c1d1960a92c\" returns successfully" Sep 12 17:13:57.399413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730643215.mount: Deactivated successfully. 
Sep 12 17:13:57.418775 containerd[2020]: time="2025-09-12T17:13:57.418042386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:57.420425 containerd[2020]: time="2025-09-12T17:13:57.420365658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:13:57.422699 containerd[2020]: time="2025-09-12T17:13:57.422640582Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:57.428480 containerd[2020]: time="2025-09-12T17:13:57.428320302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:13:57.430746 containerd[2020]: time="2025-09-12T17:13:57.430664502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.994314931s" Sep 12 17:13:57.430746 containerd[2020]: time="2025-09-12T17:13:57.430737402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:13:57.435771 containerd[2020]: time="2025-09-12T17:13:57.435714258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:13:57.444455 containerd[2020]: time="2025-09-12T17:13:57.444381798Z" level=info msg="CreateContainer within sandbox 
\"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:13:57.496664 containerd[2020]: time="2025-09-12T17:13:57.495867739Z" level=info msg="CreateContainer within sandbox \"9d1ce091940376cbf2a27a6473e8c1a774f2b9e8238ac8cd261b947dbc97aec2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0e4749f046f4754c066186545f84bdeebce2a304e706ed9c1fe6973babc4dffc\"" Sep 12 17:13:57.503001 containerd[2020]: time="2025-09-12T17:13:57.502497979Z" level=info msg="StartContainer for \"0e4749f046f4754c066186545f84bdeebce2a304e706ed9c1fe6973babc4dffc\"" Sep 12 17:13:57.576933 systemd[1]: Started cri-containerd-0e4749f046f4754c066186545f84bdeebce2a304e706ed9c1fe6973babc4dffc.scope - libcontainer container 0e4749f046f4754c066186545f84bdeebce2a304e706ed9c1fe6973babc4dffc. Sep 12 17:13:57.662690 containerd[2020]: time="2025-09-12T17:13:57.662521400Z" level=info msg="StartContainer for \"0e4749f046f4754c066186545f84bdeebce2a304e706ed9c1fe6973babc4dffc\" returns successfully" Sep 12 17:13:58.527270 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.103.0:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.103.0:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 9 caliafad6e6af0d [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 10 calid8c8efd4f4b [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 11 vxlan.calico [fe80::64dd:80ff:febf:104c%6]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 12 cali966798fcb58 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 13 calieb2e3f75923 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 
Sep 17:13:58 ntpd[1987]: Listen normally on 14 cali364c81f2984 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 15 calie83c58df7fe [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 16 cali1738b3d4f5c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:13:58.528855 ntpd[1987]: 12 Sep 17:13:58 ntpd[1987]: Listen normally on 17 calied69fc64fcc [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:13:58.527410 ntpd[1987]: Listen normally on 9 caliafad6e6af0d [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:13:58.527493 ntpd[1987]: Listen normally on 10 calid8c8efd4f4b [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 17:13:58.527566 ntpd[1987]: Listen normally on 11 vxlan.calico [fe80::64dd:80ff:febf:104c%6]:123 Sep 12 17:13:58.527713 ntpd[1987]: Listen normally on 12 cali966798fcb58 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:13:58.527793 ntpd[1987]: Listen normally on 13 calieb2e3f75923 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:13:58.527862 ntpd[1987]: Listen normally on 14 cali364c81f2984 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:13:58.527935 ntpd[1987]: Listen normally on 15 calie83c58df7fe [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:13:58.528013 ntpd[1987]: Listen normally on 16 cali1738b3d4f5c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:13:58.528085 ntpd[1987]: Listen normally on 17 calied69fc64fcc [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:14:00.382819 containerd[2020]: time="2025-09-12T17:14:00.382722477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.384645 containerd[2020]: time="2025-09-12T17:14:00.384501885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:14:00.386834 containerd[2020]: time="2025-09-12T17:14:00.386701473Z" level=info 
msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.393495 containerd[2020]: time="2025-09-12T17:14:00.393388797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:00.396385 containerd[2020]: time="2025-09-12T17:14:00.395245605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.958971943s" Sep 12 17:14:00.396385 containerd[2020]: time="2025-09-12T17:14:00.395325561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:14:00.399180 containerd[2020]: time="2025-09-12T17:14:00.398688969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:14:00.442435 containerd[2020]: time="2025-09-12T17:14:00.442365309Z" level=info msg="CreateContainer within sandbox \"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:14:00.475588 containerd[2020]: time="2025-09-12T17:14:00.475389177Z" level=info msg="CreateContainer within sandbox \"3cfc10f4bc6a82365490b783c6e15ed4ff73a3ffa53b6461743b974dd3853d30\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e\"" Sep 12 17:14:00.477499 
containerd[2020]: time="2025-09-12T17:14:00.477264729Z" level=info msg="StartContainer for \"314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e\"" Sep 12 17:14:00.581115 systemd[1]: Started cri-containerd-314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e.scope - libcontainer container 314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e. Sep 12 17:14:00.661998 containerd[2020]: time="2025-09-12T17:14:00.661675726Z" level=info msg="StartContainer for \"314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e\" returns successfully" Sep 12 17:14:01.188273 kubelet[3436]: I0912 17:14:01.187358 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bc75cf845-pvlzj" podStartSLOduration=5.470577372 podStartE2EDuration="16.187330281s" podCreationTimestamp="2025-09-12 17:13:45 +0000 UTC" firstStartedPulling="2025-09-12 17:13:46.716691141 +0000 UTC m=+54.929774554" lastFinishedPulling="2025-09-12 17:13:57.433444062 +0000 UTC m=+65.646527463" observedRunningTime="2025-09-12 17:13:58.11420253 +0000 UTC m=+66.327285955" watchObservedRunningTime="2025-09-12 17:14:01.187330281 +0000 UTC m=+69.400413694" Sep 12 17:14:01.414145 systemd[1]: Started sshd@14-172.31.30.188:22-147.75.109.163:42570.service - OpenSSH per-connection server daemon (147.75.109.163:42570). 
Sep 12 17:14:01.427894 kubelet[3436]: I0912 17:14:01.426978 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56c7c98bd9-hsxmb" podStartSLOduration=27.446926549 podStartE2EDuration="36.426954298s" podCreationTimestamp="2025-09-12 17:13:25 +0000 UTC" firstStartedPulling="2025-09-12 17:13:51.41766996 +0000 UTC m=+59.630753385" lastFinishedPulling="2025-09-12 17:14:00.397697721 +0000 UTC m=+68.610781134" observedRunningTime="2025-09-12 17:14:01.189090033 +0000 UTC m=+69.402173434" watchObservedRunningTime="2025-09-12 17:14:01.426954298 +0000 UTC m=+69.640037807" Sep 12 17:14:01.622729 sshd[6219]: Accepted publickey for core from 147.75.109.163 port 42570 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:01.627784 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:01.642468 systemd-logind[1992]: New session 15 of user core. Sep 12 17:14:01.648925 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:14:01.920339 sshd[6219]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:01.932187 systemd[1]: sshd@14-172.31.30.188:22-147.75.109.163:42570.service: Deactivated successfully. Sep 12 17:14:01.937447 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:14:01.939987 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:14:01.944091 systemd-logind[1992]: Removed session 15. 
Sep 12 17:14:04.748202 containerd[2020]: time="2025-09-12T17:14:04.748084791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:04.750472 containerd[2020]: time="2025-09-12T17:14:04.750177759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:14:04.752489 containerd[2020]: time="2025-09-12T17:14:04.752408211Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:04.761122 containerd[2020]: time="2025-09-12T17:14:04.759589395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:04.761122 containerd[2020]: time="2025-09-12T17:14:04.760963611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.36221109s" Sep 12 17:14:04.761122 containerd[2020]: time="2025-09-12T17:14:04.761010195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:14:04.764160 containerd[2020]: time="2025-09-12T17:14:04.764111139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:14:04.771327 containerd[2020]: time="2025-09-12T17:14:04.771253047Z" level=info msg="CreateContainer within sandbox 
\"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:14:04.802023 containerd[2020]: time="2025-09-12T17:14:04.801840051Z" level=info msg="CreateContainer within sandbox \"195bfd8e617587880cfd9e185ce729f5b9d003732604e9f97c3a8b41adba1f71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4028515a2b21b52ded2cad5f3434bfd13ee50847767f525d9c47d3e08bec22d3\"" Sep 12 17:14:04.804620 containerd[2020]: time="2025-09-12T17:14:04.804185223Z" level=info msg="StartContainer for \"4028515a2b21b52ded2cad5f3434bfd13ee50847767f525d9c47d3e08bec22d3\"" Sep 12 17:14:04.887912 systemd[1]: Started cri-containerd-4028515a2b21b52ded2cad5f3434bfd13ee50847767f525d9c47d3e08bec22d3.scope - libcontainer container 4028515a2b21b52ded2cad5f3434bfd13ee50847767f525d9c47d3e08bec22d3. Sep 12 17:14:04.967053 containerd[2020]: time="2025-09-12T17:14:04.966924232Z" level=info msg="StartContainer for \"4028515a2b21b52ded2cad5f3434bfd13ee50847767f525d9c47d3e08bec22d3\" returns successfully" Sep 12 17:14:05.154760 containerd[2020]: time="2025-09-12T17:14:05.153558409Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:05.159680 containerd[2020]: time="2025-09-12T17:14:05.158925937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:14:05.163278 containerd[2020]: time="2025-09-12T17:14:05.163191997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 398.780486ms" Sep 12 17:14:05.163278 containerd[2020]: 
time="2025-09-12T17:14:05.163262173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:14:05.169179 containerd[2020]: time="2025-09-12T17:14:05.167867521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:14:05.173800 kubelet[3436]: I0912 17:14:05.173091 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d9c6cf68-z4b6c" podStartSLOduration=41.056878044 podStartE2EDuration="54.173070241s" podCreationTimestamp="2025-09-12 17:13:11 +0000 UTC" firstStartedPulling="2025-09-12 17:13:51.646716146 +0000 UTC m=+59.859799559" lastFinishedPulling="2025-09-12 17:14:04.762908247 +0000 UTC m=+72.975991756" observedRunningTime="2025-09-12 17:14:05.167151481 +0000 UTC m=+73.380234906" watchObservedRunningTime="2025-09-12 17:14:05.173070241 +0000 UTC m=+73.386153642" Sep 12 17:14:05.178963 containerd[2020]: time="2025-09-12T17:14:05.178538677Z" level=info msg="CreateContainer within sandbox \"b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:14:05.220718 containerd[2020]: time="2025-09-12T17:14:05.219130669Z" level=info msg="CreateContainer within sandbox \"b61e622f08aefd3c45ebe37e49af5f9f355347f8bd5e20f59004a31e659aea73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16319c5446aded35d9a5578cdcbbe765783527e81c7b60cf6ff8148694989053\"" Sep 12 17:14:05.223903 containerd[2020]: time="2025-09-12T17:14:05.223835065Z" level=info msg="StartContainer for \"16319c5446aded35d9a5578cdcbbe765783527e81c7b60cf6ff8148694989053\"" Sep 12 17:14:05.289999 systemd[1]: Started cri-containerd-16319c5446aded35d9a5578cdcbbe765783527e81c7b60cf6ff8148694989053.scope - libcontainer container 16319c5446aded35d9a5578cdcbbe765783527e81c7b60cf6ff8148694989053. 
Sep 12 17:14:05.399972 containerd[2020]: time="2025-09-12T17:14:05.397150262Z" level=info msg="StartContainer for \"16319c5446aded35d9a5578cdcbbe765783527e81c7b60cf6ff8148694989053\" returns successfully" Sep 12 17:14:06.153110 kubelet[3436]: I0912 17:14:06.153020 3436 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:06.965577 systemd[1]: Started sshd@15-172.31.30.188:22-147.75.109.163:42580.service - OpenSSH per-connection server daemon (147.75.109.163:42580). Sep 12 17:14:07.219655 sshd[6331]: Accepted publickey for core from 147.75.109.163 port 42580 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:07.225395 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:07.257018 systemd-logind[1992]: New session 16 of user core. Sep 12 17:14:07.259990 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:14:07.690941 sshd[6331]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:07.708401 systemd[1]: sshd@15-172.31.30.188:22-147.75.109.163:42580.service: Deactivated successfully. Sep 12 17:14:07.723793 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:14:07.728995 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:14:07.735848 systemd-logind[1992]: Removed session 16. Sep 12 17:14:08.016995 kubelet[3436]: I0912 17:14:08.016828 3436 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:14:08.666692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605403055.mount: Deactivated successfully. 
Sep 12 17:14:08.856323 kubelet[3436]: I0912 17:14:08.856233 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d9c6cf68-vs5cz" podStartSLOduration=45.241895248 podStartE2EDuration="57.856211899s" podCreationTimestamp="2025-09-12 17:13:11 +0000 UTC" firstStartedPulling="2025-09-12 17:13:52.551058878 +0000 UTC m=+60.764142291" lastFinishedPulling="2025-09-12 17:14:05.165375541 +0000 UTC m=+73.378458942" observedRunningTime="2025-09-12 17:14:06.18328679 +0000 UTC m=+74.396370215" watchObservedRunningTime="2025-09-12 17:14:08.856211899 +0000 UTC m=+77.069295312" Sep 12 17:14:09.869440 containerd[2020]: time="2025-09-12T17:14:09.869364248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:09.871527 containerd[2020]: time="2025-09-12T17:14:09.871460156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:14:09.873694 containerd[2020]: time="2025-09-12T17:14:09.872387192Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:09.879798 containerd[2020]: time="2025-09-12T17:14:09.879698672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:09.881783 containerd[2020]: time="2025-09-12T17:14:09.881696672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.713766067s" Sep 12 17:14:09.881783 containerd[2020]: time="2025-09-12T17:14:09.881768588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:14:09.884171 containerd[2020]: time="2025-09-12T17:14:09.884095460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:14:09.892489 containerd[2020]: time="2025-09-12T17:14:09.892067204Z" level=info msg="CreateContainer within sandbox \"42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:14:09.922642 containerd[2020]: time="2025-09-12T17:14:09.922544780Z" level=info msg="CreateContainer within sandbox \"42d21db1f7b94a5e28ce6702221c9fb1788b96fe65922afe583bdeec53d365e7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4\"" Sep 12 17:14:09.924963 containerd[2020]: time="2025-09-12T17:14:09.924664772Z" level=info msg="StartContainer for \"fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4\"" Sep 12 17:14:10.023402 systemd[1]: run-containerd-runc-k8s.io-fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4-runc.EK0ipC.mount: Deactivated successfully. Sep 12 17:14:10.037966 systemd[1]: Started cri-containerd-fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4.scope - libcontainer container fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4. 
Sep 12 17:14:10.207776 containerd[2020]: time="2025-09-12T17:14:10.207696018Z" level=info msg="StartContainer for \"fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4\" returns successfully" Sep 12 17:14:11.256360 kubelet[3436]: I0912 17:14:11.256210 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-28q4g" podStartSLOduration=28.975726858 podStartE2EDuration="45.256178119s" podCreationTimestamp="2025-09-12 17:13:26 +0000 UTC" firstStartedPulling="2025-09-12 17:13:53.602691075 +0000 UTC m=+61.815774476" lastFinishedPulling="2025-09-12 17:14:09.883142276 +0000 UTC m=+78.096225737" observedRunningTime="2025-09-12 17:14:11.255875143 +0000 UTC m=+79.468958544" watchObservedRunningTime="2025-09-12 17:14:11.256178119 +0000 UTC m=+79.469261520" Sep 12 17:14:11.345156 systemd[1]: run-containerd-runc-k8s.io-fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4-runc.ALlH6L.mount: Deactivated successfully. Sep 12 17:14:11.652024 containerd[2020]: time="2025-09-12T17:14:11.650874741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:11.653690 containerd[2020]: time="2025-09-12T17:14:11.652838277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:14:11.654762 containerd[2020]: time="2025-09-12T17:14:11.654647649Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:11.661163 containerd[2020]: time="2025-09-12T17:14:11.661057989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
17:14:11.664434 containerd[2020]: time="2025-09-12T17:14:11.663266181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.779092865s" Sep 12 17:14:11.664434 containerd[2020]: time="2025-09-12T17:14:11.663365349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:14:11.671080 containerd[2020]: time="2025-09-12T17:14:11.671012721Z" level=info msg="CreateContainer within sandbox \"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:14:11.698676 containerd[2020]: time="2025-09-12T17:14:11.698456505Z" level=info msg="CreateContainer within sandbox \"05c4021828bab9ed2bb98fd3135c7dadab6e35fee4ea7fb1c248e69ee5a939bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3011e395acfd5f0270ecb7f420a5c0786f6d74b161cd0600bf46ab86fa088c80\"" Sep 12 17:14:11.699932 containerd[2020]: time="2025-09-12T17:14:11.699871209Z" level=info msg="StartContainer for \"3011e395acfd5f0270ecb7f420a5c0786f6d74b161cd0600bf46ab86fa088c80\"" Sep 12 17:14:11.778947 systemd[1]: Started cri-containerd-3011e395acfd5f0270ecb7f420a5c0786f6d74b161cd0600bf46ab86fa088c80.scope - libcontainer container 3011e395acfd5f0270ecb7f420a5c0786f6d74b161cd0600bf46ab86fa088c80. 
Sep 12 17:14:11.851880 containerd[2020]: time="2025-09-12T17:14:11.851688718Z" level=info msg="StartContainer for \"3011e395acfd5f0270ecb7f420a5c0786f6d74b161cd0600bf46ab86fa088c80\" returns successfully" Sep 12 17:14:12.273759 kubelet[3436]: I0912 17:14:12.273481 3436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cg78j" podStartSLOduration=23.774816575 podStartE2EDuration="47.273453464s" podCreationTimestamp="2025-09-12 17:13:25 +0000 UTC" firstStartedPulling="2025-09-12 17:13:48.166360196 +0000 UTC m=+56.379443621" lastFinishedPulling="2025-09-12 17:14:11.664997109 +0000 UTC m=+79.878080510" observedRunningTime="2025-09-12 17:14:12.27215126 +0000 UTC m=+80.485234769" watchObservedRunningTime="2025-09-12 17:14:12.273453464 +0000 UTC m=+80.486536877" Sep 12 17:14:12.287557 kubelet[3436]: I0912 17:14:12.287479 3436 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:14:12.287557 kubelet[3436]: I0912 17:14:12.287563 3436 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:14:12.739221 systemd[1]: Started sshd@16-172.31.30.188:22-147.75.109.163:59398.service - OpenSSH per-connection server daemon (147.75.109.163:59398). Sep 12 17:14:12.962895 sshd[6490]: Accepted publickey for core from 147.75.109.163 port 59398 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:12.971094 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:12.992518 systemd-logind[1992]: New session 17 of user core. Sep 12 17:14:13.002000 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 17:14:13.412784 sshd[6490]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:13.421847 systemd[1]: sshd@16-172.31.30.188:22-147.75.109.163:59398.service: Deactivated successfully. Sep 12 17:14:13.428265 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:14:13.436698 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:14:13.440351 systemd-logind[1992]: Removed session 17. Sep 12 17:14:17.203834 systemd[1]: run-containerd-runc-k8s.io-fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4-runc.F16z0Z.mount: Deactivated successfully. Sep 12 17:14:18.459815 systemd[1]: Started sshd@17-172.31.30.188:22-147.75.109.163:59410.service - OpenSSH per-connection server daemon (147.75.109.163:59410). Sep 12 17:14:18.676126 sshd[6574]: Accepted publickey for core from 147.75.109.163 port 59410 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:18.681512 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:18.696342 systemd-logind[1992]: New session 18 of user core. Sep 12 17:14:18.705057 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:14:19.062533 sshd[6574]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:19.073299 systemd[1]: sshd@17-172.31.30.188:22-147.75.109.163:59410.service: Deactivated successfully. Sep 12 17:14:19.083403 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:14:19.088859 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:14:19.117194 systemd[1]: Started sshd@18-172.31.30.188:22-147.75.109.163:59414.service - OpenSSH per-connection server daemon (147.75.109.163:59414). Sep 12 17:14:19.121464 systemd-logind[1992]: Removed session 18. 
Sep 12 17:14:19.318747 sshd[6587]: Accepted publickey for core from 147.75.109.163 port 59414 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:19.322566 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:19.338623 systemd-logind[1992]: New session 19 of user core. Sep 12 17:14:19.344967 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:14:20.170474 sshd[6587]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:20.181453 systemd[1]: sshd@18-172.31.30.188:22-147.75.109.163:59414.service: Deactivated successfully. Sep 12 17:14:20.194011 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:14:20.219059 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:14:20.228188 systemd[1]: Started sshd@19-172.31.30.188:22-147.75.109.163:33844.service - OpenSSH per-connection server daemon (147.75.109.163:33844). Sep 12 17:14:20.235423 systemd-logind[1992]: Removed session 19. Sep 12 17:14:20.452662 sshd[6598]: Accepted publickey for core from 147.75.109.163 port 33844 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:20.456381 sshd[6598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:20.472293 systemd-logind[1992]: New session 20 of user core. Sep 12 17:14:20.477932 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:14:22.178178 sshd[6598]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:22.189735 systemd[1]: sshd@19-172.31.30.188:22-147.75.109.163:33844.service: Deactivated successfully. Sep 12 17:14:22.203254 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:14:22.213179 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit. 
Sep 12 17:14:22.247029 systemd[1]: Started sshd@20-172.31.30.188:22-147.75.109.163:33854.service - OpenSSH per-connection server daemon (147.75.109.163:33854). Sep 12 17:14:22.250676 systemd-logind[1992]: Removed session 20. Sep 12 17:14:22.454194 sshd[6614]: Accepted publickey for core from 147.75.109.163 port 33854 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:22.457652 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:22.470710 systemd-logind[1992]: New session 21 of user core. Sep 12 17:14:22.476380 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:14:23.187987 sshd[6614]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:23.203391 systemd[1]: sshd@20-172.31.30.188:22-147.75.109.163:33854.service: Deactivated successfully. Sep 12 17:14:23.210208 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:14:23.214706 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:14:23.238204 systemd[1]: Started sshd@21-172.31.30.188:22-147.75.109.163:33860.service - OpenSSH per-connection server daemon (147.75.109.163:33860). Sep 12 17:14:23.243743 systemd-logind[1992]: Removed session 21. Sep 12 17:14:23.428450 sshd[6628]: Accepted publickey for core from 147.75.109.163 port 33860 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:23.430476 sshd[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:23.443955 systemd-logind[1992]: New session 22 of user core. Sep 12 17:14:23.457328 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:14:23.765014 sshd[6628]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:23.776336 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:14:23.779172 systemd[1]: sshd@21-172.31.30.188:22-147.75.109.163:33860.service: Deactivated successfully. 
Sep 12 17:14:23.797378 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:14:23.808096 systemd-logind[1992]: Removed session 22. Sep 12 17:14:28.814220 systemd[1]: Started sshd@22-172.31.30.188:22-147.75.109.163:33862.service - OpenSSH per-connection server daemon (147.75.109.163:33862). Sep 12 17:14:29.007684 sshd[6644]: Accepted publickey for core from 147.75.109.163 port 33862 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:29.009844 sshd[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:29.024298 systemd-logind[1992]: New session 23 of user core. Sep 12 17:14:29.030798 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:14:29.314441 sshd[6644]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:29.326148 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:14:29.327823 systemd[1]: sshd@22-172.31.30.188:22-147.75.109.163:33862.service: Deactivated successfully. Sep 12 17:14:29.333736 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:14:29.336419 systemd-logind[1992]: Removed session 23. Sep 12 17:14:34.364154 systemd[1]: Started sshd@23-172.31.30.188:22-147.75.109.163:55718.service - OpenSSH per-connection server daemon (147.75.109.163:55718). Sep 12 17:14:34.550764 sshd[6684]: Accepted publickey for core from 147.75.109.163 port 55718 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:34.552947 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:34.569914 systemd-logind[1992]: New session 24 of user core. Sep 12 17:14:34.574994 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:14:34.905189 sshd[6684]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:34.917458 systemd[1]: sshd@23-172.31.30.188:22-147.75.109.163:55718.service: Deactivated successfully. 
Sep 12 17:14:34.927561 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:14:34.930015 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:14:34.934187 systemd-logind[1992]: Removed session 24. Sep 12 17:14:39.945156 systemd[1]: Started sshd@24-172.31.30.188:22-147.75.109.163:45338.service - OpenSSH per-connection server daemon (147.75.109.163:45338). Sep 12 17:14:40.146664 sshd[6700]: Accepted publickey for core from 147.75.109.163 port 45338 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:40.149813 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:40.161570 systemd-logind[1992]: New session 25 of user core. Sep 12 17:14:40.169942 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:14:40.451421 sshd[6700]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:40.466070 systemd[1]: sshd@24-172.31.30.188:22-147.75.109.163:45338.service: Deactivated successfully. Sep 12 17:14:40.473753 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:14:40.475651 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:14:40.478354 systemd-logind[1992]: Removed session 25. Sep 12 17:14:43.293085 systemd[1]: run-containerd-runc-k8s.io-fc126e9c79b7c953d4770447dd84d046b833ea010244342672c25ea7a47a62d4-runc.IntiJR.mount: Deactivated successfully. Sep 12 17:14:45.492757 systemd[1]: Started sshd@25-172.31.30.188:22-147.75.109.163:45348.service - OpenSSH per-connection server daemon (147.75.109.163:45348). Sep 12 17:14:45.703464 sshd[6736]: Accepted publickey for core from 147.75.109.163 port 45348 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:45.706729 sshd[6736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:45.717105 systemd-logind[1992]: New session 26 of user core. 
Sep 12 17:14:45.723924 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:14:46.041166 sshd[6736]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:46.047297 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:14:46.048503 systemd[1]: sshd@25-172.31.30.188:22-147.75.109.163:45348.service: Deactivated successfully. Sep 12 17:14:46.057863 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:14:46.060765 systemd-logind[1992]: Removed session 26. Sep 12 17:14:51.087278 systemd[1]: Started sshd@26-172.31.30.188:22-147.75.109.163:60722.service - OpenSSH per-connection server daemon (147.75.109.163:60722). Sep 12 17:14:51.277868 sshd[6773]: Accepted publickey for core from 147.75.109.163 port 60722 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:14:51.282049 sshd[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:51.294450 systemd-logind[1992]: New session 27 of user core. Sep 12 17:14:51.302351 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:14:51.602391 sshd[6773]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:51.611871 systemd[1]: sshd@26-172.31.30.188:22-147.75.109.163:60722.service: Deactivated successfully. Sep 12 17:14:51.617477 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:14:51.620694 systemd-logind[1992]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:14:51.624078 systemd-logind[1992]: Removed session 27. Sep 12 17:14:57.186453 containerd[2020]: time="2025-09-12T17:14:57.186280179Z" level=info msg="StopPodSandbox for \"752494cb4fe8ed402d8d53a9be3f3b291797ec7e30d0731a5a4e12ef7f0c4468\"" Sep 12 17:15:01.150036 systemd[1]: run-containerd-runc-k8s.io-314d2202ea04f03ced7ae910374e9b7c7c4e9fa7933980e95fda9d89d48c296e-runc.KgMxqQ.mount: Deactivated successfully. 
Sep 12 17:15:06.247821 systemd[1]: cri-containerd-2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9.scope: Deactivated successfully. Sep 12 17:15:06.249857 systemd[1]: cri-containerd-2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9.scope: Consumed 30.978s CPU time. Sep 12 17:15:06.298619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9-rootfs.mount: Deactivated successfully. Sep 12 17:15:06.325307 containerd[2020]: time="2025-09-12T17:15:06.297768192Z" level=info msg="shim disconnected" id=2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9 namespace=k8s.io Sep 12 17:15:06.325307 containerd[2020]: time="2025-09-12T17:15:06.325291093Z" level=warning msg="cleaning up after shim disconnected" id=2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9 namespace=k8s.io Sep 12 17:15:06.326138 containerd[2020]: time="2025-09-12T17:15:06.325320529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:06.448241 kubelet[3436]: I0912 17:15:06.447215 3436 scope.go:117] "RemoveContainer" containerID="2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9" Sep 12 17:15:06.453655 containerd[2020]: time="2025-09-12T17:15:06.453154549Z" level=info msg="CreateContainer within sandbox \"4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:15:06.498769 containerd[2020]: time="2025-09-12T17:15:06.498085129Z" level=info msg="CreateContainer within sandbox \"4b668878ef17705e3764d6f819c2af75887156685a9e226baf40bac0f4fb4705\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e\"" Sep 12 17:15:06.500102 containerd[2020]: time="2025-09-12T17:15:06.499247269Z" level=info msg="StartContainer for 
\"d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e\"" Sep 12 17:15:06.576961 systemd[1]: Started cri-containerd-d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e.scope - libcontainer container d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e. Sep 12 17:15:06.637579 containerd[2020]: time="2025-09-12T17:15:06.637480214Z" level=info msg="StartContainer for \"d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e\" returns successfully" Sep 12 17:15:06.942221 systemd[1]: cri-containerd-a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc.scope: Deactivated successfully. Sep 12 17:15:06.944925 systemd[1]: cri-containerd-a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc.scope: Consumed 5.820s CPU time, 17.4M memory peak, 0B memory swap peak. Sep 12 17:15:06.992050 containerd[2020]: time="2025-09-12T17:15:06.991948816Z" level=info msg="shim disconnected" id=a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc namespace=k8s.io Sep 12 17:15:06.992050 containerd[2020]: time="2025-09-12T17:15:06.992027944Z" level=warning msg="cleaning up after shim disconnected" id=a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc namespace=k8s.io Sep 12 17:15:06.992050 containerd[2020]: time="2025-09-12T17:15:06.992050960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:07.294701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc-rootfs.mount: Deactivated successfully. 
Sep 12 17:15:07.453623 kubelet[3436]: I0912 17:15:07.453535 3436 scope.go:117] "RemoveContainer" containerID="a63e4fbf207722a65c5f6c46e6b10ebcaf4d726d688c4f691540d8e0c2a447fc"
Sep 12 17:15:07.458948 containerd[2020]: time="2025-09-12T17:15:07.458390450Z" level=info msg="CreateContainer within sandbox \"6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:15:07.494210 containerd[2020]: time="2025-09-12T17:15:07.493798478Z" level=info msg="CreateContainer within sandbox \"6054f0bfa6d80d334cf2bf1607fa69d1c9dafacc1629ae3a89e4b0188c03230c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced\""
Sep 12 17:15:07.495008 containerd[2020]: time="2025-09-12T17:15:07.494934050Z" level=info msg="StartContainer for \"d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced\""
Sep 12 17:15:07.571969 systemd[1]: Started cri-containerd-d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced.scope - libcontainer container d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced.
Sep 12 17:15:07.657101 containerd[2020]: time="2025-09-12T17:15:07.656886099Z" level=info msg="StartContainer for \"d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced\" returns successfully"
Sep 12 17:15:08.297753 systemd[1]: run-containerd-runc-k8s.io-d5aa4574f99389c8d0cc9d279f9e13c9805450671b7dbbe0bf634af12031dced-runc.E0HDhp.mount: Deactivated successfully.
Sep 12 17:15:11.810958 systemd[1]: cri-containerd-bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887.scope: Deactivated successfully.
Sep 12 17:15:11.811502 systemd[1]: cri-containerd-bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887.scope: Consumed 5.063s CPU time, 14.3M memory peak, 0B memory swap peak.
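systemd reports per-scope resource usage at deactivation ("Consumed 30.978s CPU time" for the tigera-operator container versus 5.820s for kube-controller-manager above). Extracting those figures from the log quickly shows which restarting container was the heaviest. A sketch under the assumption that every line of interest matches the `cri-containerd-<id>.scope: Consumed ...` pattern (the helper name `cpu_seconds` is mine):

```python
import re

# systemd logs CPU accounting when a transient scope is deactivated, e.g.
#   "cri-containerd-<64-hex-id>.scope: Consumed 5.820s CPU time, 17.4M memory peak, ..."
USAGE_RE = re.compile(
    r"cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: Consumed (?P<cpu>[\d.]+)s CPU time"
)

def cpu_seconds(lines):
    """Map container ID -> CPU seconds reported at scope deactivation."""
    usage = {}
    for line in lines:
        m = USAGE_RE.search(line)
        if m:
            usage[m.group("cid")] = float(m.group("cpu"))
    return usage
```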
Sep 12 17:15:11.855282 containerd[2020]: time="2025-09-12T17:15:11.854973848Z" level=info msg="shim disconnected" id=bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887 namespace=k8s.io
Sep 12 17:15:11.857644 containerd[2020]: time="2025-09-12T17:15:11.855914084Z" level=warning msg="cleaning up after shim disconnected" id=bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887 namespace=k8s.io
Sep 12 17:15:11.857644 containerd[2020]: time="2025-09-12T17:15:11.855969176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:11.868793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887-rootfs.mount: Deactivated successfully.
Sep 12 17:15:12.481764 kubelet[3436]: I0912 17:15:12.481681 3436 scope.go:117] "RemoveContainer" containerID="bea4dbe945e521fee30e3c4ce683868bf4936ddb567ad3efba054a27e27ca887"
Sep 12 17:15:12.486491 containerd[2020]: time="2025-09-12T17:15:12.486224983Z" level=info msg="CreateContainer within sandbox \"0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:15:12.516915 containerd[2020]: time="2025-09-12T17:15:12.516827167Z" level=info msg="CreateContainer within sandbox \"0c4c685aef336a70ce033f71dcaa51c6c52566f761a55718027d1fb4c87b77ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4a613a0474954685e2cb12b6ae857b2d4cb7f12a2113f645f60980a78c91039b\""
Sep 12 17:15:12.519080 containerd[2020]: time="2025-09-12T17:15:12.518026255Z" level=info msg="StartContainer for \"4a613a0474954685e2cb12b6ae857b2d4cb7f12a2113f645f60980a78c91039b\""
Sep 12 17:15:12.590034 systemd[1]: Started cri-containerd-4a613a0474954685e2cb12b6ae857b2d4cb7f12a2113f645f60980a78c91039b.scope - libcontainer container 4a613a0474954685e2cb12b6ae857b2d4cb7f12a2113f645f60980a78c91039b.
Sep 12 17:15:12.670103 containerd[2020]: time="2025-09-12T17:15:12.670040588Z" level=info msg="StartContainer for \"4a613a0474954685e2cb12b6ae857b2d4cb7f12a2113f645f60980a78c91039b\" returns successfully" Sep 12 17:15:14.573757 kubelet[3436]: E0912 17:15:14.573675 3436 request.go:1360] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)" Sep 12 17:15:14.574421 kubelet[3436]: E0912 17:15:14.573824 3436 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Sep 12 17:15:18.102573 systemd[1]: cri-containerd-d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e.scope: Deactivated successfully. Sep 12 17:15:18.145364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e-rootfs.mount: Deactivated successfully. 
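The "Failed to update lease" errors show the kubelet's node-lease renewal hitting its per-request deadline (the `?timeout=10s` on the PUT) while the control-plane containers restart; the lease controller simply retries on subsequent attempts. A much-simplified model of that retry behavior, not the actual client-go implementation (`renew_lease` and `send` are hypothetical names):

```python
def renew_lease(send, attempt_timeout=10.0, retries=3):
    """Simplified lease renewer: each attempt gets its own deadline and a
    failed attempt is retried, mirroring the kubelet's repeated PUTs above.
    `send` is a callable that raises TimeoutError when the deadline expires."""
    for _ in range(retries):
        try:
            return send(timeout=attempt_timeout)
        except TimeoutError:
            continue  # next attempt; the real controller waits for the renew interval
    raise TimeoutError(f"lease not renewed after {retries} attempts")
```

The point of the model: a single "context deadline exceeded" is not fatal; the node only risks being marked not-ready if renewal keeps failing past the lease duration.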
Sep 12 17:15:18.159468 containerd[2020]: time="2025-09-12T17:15:18.159094199Z" level=info msg="shim disconnected" id=d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e namespace=k8s.io
Sep 12 17:15:18.159468 containerd[2020]: time="2025-09-12T17:15:18.159175991Z" level=warning msg="cleaning up after shim disconnected" id=d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e namespace=k8s.io
Sep 12 17:15:18.159468 containerd[2020]: time="2025-09-12T17:15:18.159197015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:18.513372 kubelet[3436]: I0912 17:15:18.512488 3436 scope.go:117] "RemoveContainer" containerID="2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9"
Sep 12 17:15:18.513372 kubelet[3436]: I0912 17:15:18.513006 3436 scope.go:117] "RemoveContainer" containerID="d612ad2b02288b0ddbe24faacdd87e0decfea30dabea8d6a60c778d7d5261d3e"
Sep 12 17:15:18.513372 kubelet[3436]: E0912 17:15:18.513299 3436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-hrlb2_tigera-operator(57405433-a068-4f49-bd0e-10e4ab3fcdfa)\"" pod="tigera-operator/tigera-operator-755d956888-hrlb2" podUID="57405433-a068-4f49-bd0e-10e4ab3fcdfa"
Sep 12 17:15:18.516178 containerd[2020]: time="2025-09-12T17:15:18.515736613Z" level=info msg="RemoveContainer for \"2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9\""
Sep 12 17:15:18.523233 containerd[2020]: time="2025-09-12T17:15:18.523168057Z" level=info msg="RemoveContainer for \"2919f70d6ea425ab4d3944115a3191ea07b88581c978f5948a45338d3b7abdc9\" returns successfully"
Sep 12 17:15:24.576642 kubelet[3436]: E0912 17:15:24.574590 3436 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-188?timeout=10s\": context deadline exceeded"
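The "back-off 10s restarting failed container" message is the start of kubelet's CrashLoopBackOff schedule: to my understanding, the delay begins at 10s and doubles on each subsequent failed restart, capped at 5 minutes (the exact constants are kubelet internals, not part of this log). A sketch of that schedule (`crashloop_backoffs` is my own helper name):

```python
def crashloop_backoffs(failures, base=10, cap=300):
    """Back-off delays in seconds for successive failed restarts:
    10s, 20s, 40s, ... doubling up to a 300s (5 min) ceiling."""
    return [min(base * 2 ** i, cap) for i in range(failures)]
```

So had tigera-operator kept crashing, the restarts seen above would have spaced out rapidly rather than spinning in a tight loop.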