Sep 9 23:43:24.099556 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 9 23:43:24.099604 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:43:24.099628 kernel: KASLR disabled due to lack of seed
Sep 9 23:43:24.099645 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:43:24.099661 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 9 23:43:24.099676 kernel: secureboot: Secure boot disabled
Sep 9 23:43:24.099693 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:43:24.099708 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 9 23:43:24.099724 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 9 23:43:24.099739 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 9 23:43:24.099754 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 9 23:43:24.099774 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 9 23:43:24.099789 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 9 23:43:24.099804 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 9 23:43:24.099822 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 9 23:43:24.099838 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 9 23:43:24.099859 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 9 23:43:24.099875 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 9 23:43:24.099891 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 9 23:43:24.099907 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 9 23:43:24.099922 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 9 23:43:24.099939 kernel: printk: legacy bootconsole [uart0] enabled
Sep 9 23:43:24.099954 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:43:24.099970 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:43:24.099987 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 9 23:43:24.100002 kernel: Zone ranges:
Sep 9 23:43:24.100018 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 9 23:43:24.100038 kernel: DMA32 empty
Sep 9 23:43:24.100054 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 9 23:43:24.100070 kernel: Device empty
Sep 9 23:43:24.100085 kernel: Movable zone start for each node
Sep 9 23:43:24.100101 kernel: Early memory node ranges
Sep 9 23:43:24.100117 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 9 23:43:24.100133 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 9 23:43:24.100150 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 9 23:43:24.100165 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 9 23:43:24.100182 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 9 23:43:24.100198 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 9 23:43:24.100215 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 9 23:43:24.100237 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 9 23:43:24.100260 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:43:24.100278 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 9 23:43:24.100296 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 9 23:43:24.100314 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:43:24.100337 kernel: psci: PSCIv1.0 detected in firmware.
Sep 9 23:43:24.100354 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:43:24.100396 kernel: psci: Trusted OS migration not required
Sep 9 23:43:24.100420 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:43:24.100440 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 9 23:43:24.100457 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:43:24.100475 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:43:24.100493 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 23:43:24.100510 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:43:24.100528 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:43:24.100545 kernel: CPU features: detected: Spectre-v2
Sep 9 23:43:24.100569 kernel: CPU features: detected: Spectre-v3a
Sep 9 23:43:24.100587 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:43:24.100603 kernel: CPU features: detected: ARM erratum 1742098
Sep 9 23:43:24.100620 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 9 23:43:24.100636 kernel: alternatives: applying boot alternatives
Sep 9 23:43:24.100655 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:43:24.100674 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:43:24.100691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:43:24.100708 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:43:24.100724 kernel: Fallback order for Node 0: 0
Sep 9 23:43:24.100745 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 9 23:43:24.100763 kernel: Policy zone: Normal
Sep 9 23:43:24.100779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:43:24.100796 kernel: software IO TLB: area num 2.
Sep 9 23:43:24.100812 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB)
Sep 9 23:43:24.100829 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 23:43:24.100846 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:43:24.100864 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:43:24.100881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 23:43:24.100898 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:43:24.100915 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:43:24.100933 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:43:24.100954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 23:43:24.100971 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:43:24.100988 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:43:24.101005 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:43:24.101021 kernel: GICv3: 96 SPIs implemented
Sep 9 23:43:24.101038 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:43:24.101054 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:43:24.101071 kernel: GICv3: GICv3 features: 16 PPIs
Sep 9 23:43:24.101087 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:43:24.101104 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 9 23:43:24.101120 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 9 23:43:24.101137 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:43:24.101159 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:43:24.101176 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 9 23:43:24.101192 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 9 23:43:24.101209 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 9 23:43:24.101226 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:43:24.101242 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 9 23:43:24.101260 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 9 23:43:24.101277 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 9 23:43:24.101294 kernel: Console: colour dummy device 80x25
Sep 9 23:43:24.101311 kernel: printk: legacy console [tty1] enabled
Sep 9 23:43:24.101329 kernel: ACPI: Core revision 20240827
Sep 9 23:43:24.101351 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 9 23:43:24.103402 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:43:24.103459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:43:24.103478 kernel: landlock: Up and running.
Sep 9 23:43:24.103496 kernel: SELinux: Initializing.
Sep 9 23:43:24.103514 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:43:24.103531 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:43:24.103549 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:43:24.103567 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:43:24.103594 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:43:24.103611 kernel: Remapping and enabling EFI services.
Sep 9 23:43:24.103629 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:43:24.103646 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:43:24.103663 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 9 23:43:24.103680 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 9 23:43:24.103698 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 9 23:43:24.103715 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 23:43:24.103732 kernel: SMP: Total of 2 processors activated.
Sep 9 23:43:24.103763 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:43:24.103782 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:43:24.103804 kernel: CPU features: detected: 32-bit EL1 Support
Sep 9 23:43:24.103822 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:43:24.103840 kernel: alternatives: applying system-wide alternatives
Sep 9 23:43:24.103859 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved)
Sep 9 23:43:24.103878 kernel: devtmpfs: initialized
Sep 9 23:43:24.103901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:43:24.103920 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 23:43:24.103938 kernel: 17056 pages in range for non-PLT usage
Sep 9 23:43:24.103956 kernel: 508576 pages in range for PLT usage
Sep 9 23:43:24.103974 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:43:24.103991 kernel: SMBIOS 3.0.0 present.
Sep 9 23:43:24.104009 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 9 23:43:24.104027 kernel: DMI: Memory slots populated: 0/0
Sep 9 23:43:24.104045 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:43:24.104067 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:43:24.104085 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:43:24.104103 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:43:24.104121 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:43:24.104139 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1
Sep 9 23:43:24.104156 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:43:24.104174 kernel: cpuidle: using governor menu
Sep 9 23:43:24.104192 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:43:24.104210 kernel: ASID allocator initialised with 65536 entries
Sep 9 23:43:24.104232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:43:24.104250 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:43:24.104268 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:43:24.104286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:43:24.104303 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:43:24.104321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:43:24.104339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:43:24.104357 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:43:24.104412 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:43:24.104440 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:43:24.104458 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:43:24.104476 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:43:24.104494 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:43:24.104512 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:43:24.104531 kernel: ACPI: Interpreter enabled
Sep 9 23:43:24.104548 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:43:24.104566 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:43:24.104584 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:43:24.104608 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:43:24.104626 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 9 23:43:24.104932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:43:24.105122 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:43:24.105305 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:43:24.107605 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 9 23:43:24.107815 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 9 23:43:24.107850 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 9 23:43:24.107870 kernel: acpiphp: Slot [1] registered
Sep 9 23:43:24.107889 kernel: acpiphp: Slot [2] registered
Sep 9 23:43:24.107907 kernel: acpiphp: Slot [3] registered
Sep 9 23:43:24.107925 kernel: acpiphp: Slot [4] registered
Sep 9 23:43:24.107942 kernel: acpiphp: Slot [5] registered
Sep 9 23:43:24.107961 kernel: acpiphp: Slot [6] registered
Sep 9 23:43:24.107979 kernel: acpiphp: Slot [7] registered
Sep 9 23:43:24.107996 kernel: acpiphp: Slot [8] registered
Sep 9 23:43:24.108014 kernel: acpiphp: Slot [9] registered
Sep 9 23:43:24.108037 kernel: acpiphp: Slot [10] registered
Sep 9 23:43:24.108055 kernel: acpiphp: Slot [11] registered
Sep 9 23:43:24.108072 kernel: acpiphp: Slot [12] registered
Sep 9 23:43:24.108090 kernel: acpiphp: Slot [13] registered
Sep 9 23:43:24.108108 kernel: acpiphp: Slot [14] registered
Sep 9 23:43:24.108126 kernel: acpiphp: Slot [15] registered
Sep 9 23:43:24.108144 kernel: acpiphp: Slot [16] registered
Sep 9 23:43:24.108161 kernel: acpiphp: Slot [17] registered
Sep 9 23:43:24.108179 kernel: acpiphp: Slot [18] registered
Sep 9 23:43:24.108201 kernel: acpiphp: Slot [19] registered
Sep 9 23:43:24.108219 kernel: acpiphp: Slot [20] registered
Sep 9 23:43:24.108237 kernel: acpiphp: Slot [21] registered
Sep 9 23:43:24.108254 kernel: acpiphp: Slot [22] registered
Sep 9 23:43:24.108272 kernel: acpiphp: Slot [23] registered
Sep 9 23:43:24.108290 kernel: acpiphp: Slot [24] registered
Sep 9 23:43:24.108308 kernel: acpiphp: Slot [25] registered
Sep 9 23:43:24.108326 kernel: acpiphp: Slot [26] registered
Sep 9 23:43:24.108344 kernel: acpiphp: Slot [27] registered
Sep 9 23:43:24.108361 kernel: acpiphp: Slot [28] registered
Sep 9 23:43:24.108414 kernel: acpiphp: Slot [29] registered
Sep 9 23:43:24.108433 kernel: acpiphp: Slot [30] registered
Sep 9 23:43:24.108452 kernel: acpiphp: Slot [31] registered
Sep 9 23:43:24.108470 kernel: PCI host bridge to bus 0000:00
Sep 9 23:43:24.108680 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 9 23:43:24.108852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:43:24.109022 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:43:24.109189 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 9 23:43:24.111496 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:43:24.111755 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 9 23:43:24.111977 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 9 23:43:24.112195 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 9 23:43:24.114441 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 9 23:43:24.114695 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:43:24.114917 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 9 23:43:24.115111 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 9 23:43:24.115330 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 9 23:43:24.115560 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 9 23:43:24.115753 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:43:24.115943 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 9 23:43:24.116132 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 9 23:43:24.116330 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 9 23:43:24.118586 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 9 23:43:24.118797 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 9 23:43:24.118978 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 9 23:43:24.119148 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:43:24.119348 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:43:24.119447 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:43:24.119480 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:43:24.119499 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:43:24.119518 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:43:24.119536 kernel: iommu: Default domain type: Translated
Sep 9 23:43:24.119554 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:43:24.119572 kernel: efivars: Registered efivars operations
Sep 9 23:43:24.119590 kernel: vgaarb: loaded
Sep 9 23:43:24.119608 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:43:24.119626 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:43:24.119649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:43:24.119666 kernel: pnp: PnP ACPI init
Sep 9 23:43:24.119893 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 9 23:43:24.119924 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:43:24.119942 kernel: NET: Registered PF_INET protocol family
Sep 9 23:43:24.119960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:43:24.119979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:43:24.119997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:43:24.120021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:43:24.120040 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:43:24.120058 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:43:24.120076 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:43:24.120094 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:43:24.120113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:43:24.120131 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:43:24.120149 kernel: kvm [1]: HYP mode not available
Sep 9 23:43:24.120166 kernel: Initialise system trusted keyrings
Sep 9 23:43:24.120189 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:43:24.120207 kernel: Key type asymmetric registered
Sep 9 23:43:24.120225 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:43:24.120242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:43:24.120261 kernel: io scheduler mq-deadline registered
Sep 9 23:43:24.120279 kernel: io scheduler kyber registered
Sep 9 23:43:24.120297 kernel: io scheduler bfq registered
Sep 9 23:43:24.120552 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 9 23:43:24.120592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:43:24.120612 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:43:24.120630 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 9 23:43:24.120648 kernel: ACPI: button: Sleep Button [SLPB]
Sep 9 23:43:24.120666 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:43:24.120685 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 9 23:43:24.120885 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 9 23:43:24.120913 kernel: printk: legacy console [ttyS0] disabled
Sep 9 23:43:24.120932 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 9 23:43:24.120955 kernel: printk: legacy console [ttyS0] enabled
Sep 9 23:43:24.120974 kernel: printk: legacy bootconsole [uart0] disabled
Sep 9 23:43:24.120991 kernel: thunder_xcv, ver 1.0
Sep 9 23:43:24.121009 kernel: thunder_bgx, ver 1.0
Sep 9 23:43:24.121028 kernel: nicpf, ver 1.0
Sep 9 23:43:24.121045 kernel: nicvf, ver 1.0
Sep 9 23:43:24.121244 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:43:24.123535 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:43:23 UTC (1757461403)
Sep 9 23:43:24.123588 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:43:24.123609 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 9 23:43:24.123628 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:43:24.123647 kernel: watchdog: NMI not fully supported
Sep 9 23:43:24.123664 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:43:24.123683 kernel: Segment Routing with IPv6
Sep 9 23:43:24.123702 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:43:24.123720 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:43:24.123738 kernel: Key type dns_resolver registered
Sep 9 23:43:24.123762 kernel: registered taskstats version 1
Sep 9 23:43:24.123780 kernel: Loading compiled-in X.509 certificates
Sep 9 23:43:24.123799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:43:24.123818 kernel: Demotion targets for Node 0: null
Sep 9 23:43:24.123837 kernel: Key type .fscrypt registered
Sep 9 23:43:24.123855 kernel: Key type fscrypt-provisioning registered
Sep 9 23:43:24.123873 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:43:24.123890 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:43:24.123908 kernel: ima: No architecture policies found
Sep 9 23:43:24.123931 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:43:24.123949 kernel: clk: Disabling unused clocks
Sep 9 23:43:24.123967 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:43:24.123985 kernel: Warning: unable to open an initial console.
Sep 9 23:43:24.124004 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:43:24.124021 kernel: Run /init as init process
Sep 9 23:43:24.124039 kernel: with arguments:
Sep 9 23:43:24.124057 kernel: /init
Sep 9 23:43:24.124075 kernel: with environment:
Sep 9 23:43:24.124092 kernel: HOME=/
Sep 9 23:43:24.124116 kernel: TERM=linux
Sep 9 23:43:24.124133 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:43:24.124154 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:43:24.124178 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:43:24.124200 systemd[1]: Detected virtualization amazon.
Sep 9 23:43:24.124219 systemd[1]: Detected architecture arm64.
Sep 9 23:43:24.124238 systemd[1]: Running in initrd.
Sep 9 23:43:24.124261 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:43:24.124281 systemd[1]: Hostname set to .
Sep 9 23:43:24.124300 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:43:24.124319 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:43:24.124339 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:43:24.124359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:43:24.124407 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:43:24.124431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:43:24.124459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:43:24.124481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:43:24.124503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:43:24.124523 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:43:24.124543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:43:24.124562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:43:24.124582 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:43:24.124607 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:43:24.124626 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:43:24.124645 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:43:24.124665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:43:24.124685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:43:24.124704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:43:24.124724 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:43:24.124744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:43:24.124768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:43:24.124788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:43:24.124808 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:43:24.124828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:43:24.124847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:43:24.124866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:43:24.124886 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:43:24.124906 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:43:24.124925 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:43:24.124949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:43:24.124969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:24.124988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:43:24.125009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:43:24.125033 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:43:24.125054 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:43:24.125118 systemd-journald[258]: Collecting audit messages is disabled.
Sep 9 23:43:24.125161 systemd-journald[258]: Journal started
Sep 9 23:43:24.125207 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2808150a63caee5abed127469cd98c) is 8M, max 75.3M, 67.3M free.
Sep 9 23:43:24.105553 systemd-modules-load[259]: Inserted module 'overlay'
Sep 9 23:43:24.135405 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:43:24.139720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:24.144816 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:43:24.144472 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:43:24.156339 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 9 23:43:24.159132 kernel: Bridge firewalling registered
Sep 9 23:43:24.160440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:43:24.171713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:43:24.179209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:43:24.183278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:43:24.203583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:43:24.223174 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:43:24.232548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:43:24.241931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:43:24.252652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:24.260472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:43:24.279154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:43:24.283504 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:43:24.334573 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:43:24.369690 systemd-resolved[296]: Positive Trust Anchors:
Sep 9 23:43:24.369725 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:43:24.369787 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:43:24.483407 kernel: SCSI subsystem initialized
Sep 9 23:43:24.491406 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:43:24.503406 kernel: iscsi: registered transport (tcp)
Sep 9 23:43:24.525432 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:43:24.525516 kernel: QLogic iSCSI HBA Driver
Sep 9 23:43:24.558548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:43:24.584623 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:43:24.593921 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:43:24.638420 kernel: random: crng init done
Sep 9 23:43:24.637714 systemd-resolved[296]: Defaulting to hostname 'linux'.
Sep 9 23:43:24.641779 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:43:24.646545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:43:24.694697 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:43:24.702600 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:43:24.786424 kernel: raid6: neonx8 gen() 6530 MB/s
Sep 9 23:43:24.803406 kernel: raid6: neonx4 gen() 6571 MB/s
Sep 9 23:43:24.820406 kernel: raid6: neonx2 gen() 5452 MB/s
Sep 9 23:43:24.837406 kernel: raid6: neonx1 gen() 3959 MB/s
Sep 9 23:43:24.854405 kernel: raid6: int64x8 gen() 3660 MB/s
Sep 9 23:43:24.871410 kernel: raid6: int64x4 gen() 3701 MB/s
Sep 9 23:43:24.888408 kernel: raid6: int64x2 gen() 3603 MB/s
Sep 9 23:43:24.906360 kernel: raid6: int64x1 gen() 2756 MB/s
Sep 9 23:43:24.906425 kernel: raid6: using algorithm neonx4 gen() 6571 MB/s
Sep 9 23:43:24.924411 kernel: raid6: .... xor() 4894 MB/s, rmw enabled
Sep 9 23:43:24.924455 kernel: raid6: using neon recovery algorithm
Sep 9 23:43:24.933004 kernel: xor: measuring software checksum speed
Sep 9 23:43:24.933059 kernel: 8regs : 12943 MB/sec
Sep 9 23:43:24.934172 kernel: 32regs : 13045 MB/sec
Sep 9 23:43:24.936511 kernel: arm64_neon : 8706 MB/sec
Sep 9 23:43:24.936550 kernel: xor: using function: 32regs (13045 MB/sec)
Sep 9 23:43:25.027420 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:43:25.038465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:43:25.045419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:43:25.090448 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Sep 9 23:43:25.101076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:43:25.117161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:43:25.157219 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Sep 9 23:43:25.202329 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:43:25.209203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:43:25.345436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:43:25.355988 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:43:25.493201 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:43:25.493267 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 9 23:43:25.500453 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 9 23:43:25.500754 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 9 23:43:25.515418 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:24:7b:3b:77:4d
Sep 9 23:43:25.527907 (udev-worker)[559]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:43:25.538255 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 9 23:43:25.538327 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 9 23:43:25.551399 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 9 23:43:25.554580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:43:25.554713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:25.568171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:43:25.568208 kernel: GPT:9289727 != 16777215
Sep 9 23:43:25.568241 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:43:25.568266 kernel: GPT:9289727 != 16777215
Sep 9 23:43:25.568288 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:43:25.568311 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:25.570561 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:25.572533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:25.587701 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:43:25.616439 kernel: nvme nvme0: using unchecked data buffer
Sep 9 23:43:25.621299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:25.759321 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 9 23:43:25.789167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 9 23:43:25.812966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 9 23:43:25.813358 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 9 23:43:25.816056 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:43:25.879168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 23:43:25.882055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:43:25.887276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:43:25.889909 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:43:25.901191 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:43:25.908750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:43:25.941422 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:25.942141 disk-uuid[691]: Primary Header is updated.
Sep 9 23:43:25.942141 disk-uuid[691]: Secondary Entries is updated.
Sep 9 23:43:25.942141 disk-uuid[691]: Secondary Header is updated.
Sep 9 23:43:25.949174 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:43:25.983413 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:26.991414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:26.991957 disk-uuid[698]: The operation has completed successfully.
Sep 9 23:43:27.179585 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:43:27.180134 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:43:27.278757 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:43:27.315608 sh[958]: Success
Sep 9 23:43:27.345605 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:43:27.345692 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:43:27.347693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:43:27.359446 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:43:27.458100 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:43:27.464794 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:43:27.494432 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:43:27.516415 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (981)
Sep 9 23:43:27.520698 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:43:27.520746 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:27.544403 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 23:43:27.544469 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:43:27.545702 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:43:27.562687 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:43:27.566933 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:43:27.569789 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:43:27.571307 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:43:27.581272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:43:27.630427 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014)
Sep 9 23:43:27.635063 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:27.635129 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:27.653363 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:27.653456 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:27.663431 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:27.666731 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:43:27.672440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:43:27.762725 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:43:27.776319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:43:27.856801 systemd-networkd[1150]: lo: Link UP
Sep 9 23:43:27.856825 systemd-networkd[1150]: lo: Gained carrier
Sep 9 23:43:27.859272 systemd-networkd[1150]: Enumeration completed
Sep 9 23:43:27.860166 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:27.860174 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:43:27.862704 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:43:27.867006 systemd[1]: Reached target network.target - Network.
Sep 9 23:43:27.872489 systemd-networkd[1150]: eth0: Link UP
Sep 9 23:43:27.872496 systemd-networkd[1150]: eth0: Gained carrier
Sep 9 23:43:27.872518 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:27.902455 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.18.64/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 9 23:43:28.253162 ignition[1078]: Ignition 2.21.0
Sep 9 23:43:28.253191 ignition[1078]: Stage: fetch-offline
Sep 9 23:43:28.256686 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:28.256717 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:28.261301 ignition[1078]: Ignition finished successfully
Sep 9 23:43:28.264212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:43:28.271640 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 23:43:28.310987 ignition[1163]: Ignition 2.21.0
Sep 9 23:43:28.311019 ignition[1163]: Stage: fetch
Sep 9 23:43:28.311595 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:28.311621 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:28.311897 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:28.332326 ignition[1163]: PUT result: OK
Sep 9 23:43:28.337338 ignition[1163]: parsed url from cmdline: ""
Sep 9 23:43:28.337363 ignition[1163]: no config URL provided
Sep 9 23:43:28.337402 ignition[1163]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:43:28.337428 ignition[1163]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:43:28.337464 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:28.353285 ignition[1163]: PUT result: OK
Sep 9 23:43:28.354270 ignition[1163]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 9 23:43:28.357468 ignition[1163]: GET result: OK
Sep 9 23:43:28.358940 ignition[1163]: parsing config with SHA512: 8248fcb7e82cab89fbdf545ade44b1d63f935c5b57e3042d7f55be448d3b58c6112f2f6a0b8bb6ac6d87fe7140b4de3743733c15f56fe94a1e845bba9a42c44b
Sep 9 23:43:28.370931 unknown[1163]: fetched base config from "system"
Sep 9 23:43:28.371575 unknown[1163]: fetched base config from "system"
Sep 9 23:43:28.372251 ignition[1163]: fetch: fetch complete
Sep 9 23:43:28.371590 unknown[1163]: fetched user config from "aws"
Sep 9 23:43:28.372263 ignition[1163]: fetch: fetch passed
Sep 9 23:43:28.372388 ignition[1163]: Ignition finished successfully
Sep 9 23:43:28.384886 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 23:43:28.390462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:43:28.428679 ignition[1169]: Ignition 2.21.0
Sep 9 23:43:28.429168 ignition[1169]: Stage: kargs
Sep 9 23:43:28.429740 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:28.429763 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:28.429942 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:28.441099 ignition[1169]: PUT result: OK
Sep 9 23:43:28.446194 ignition[1169]: kargs: kargs passed
Sep 9 23:43:28.447914 ignition[1169]: Ignition finished successfully
Sep 9 23:43:28.452331 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:43:28.454050 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:43:28.491863 ignition[1176]: Ignition 2.21.0
Sep 9 23:43:28.492355 ignition[1176]: Stage: disks
Sep 9 23:43:28.493245 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:28.493268 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:28.493429 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:28.501673 ignition[1176]: PUT result: OK
Sep 9 23:43:28.506800 ignition[1176]: disks: disks passed
Sep 9 23:43:28.506951 ignition[1176]: Ignition finished successfully
Sep 9 23:43:28.510182 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:43:28.517200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:43:28.522017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:43:28.525187 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:43:28.532163 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:43:28.537105 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:43:28.541047 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:43:28.593949 systemd-fsck[1184]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:43:28.600793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:43:28.614549 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:43:28.746395 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:43:28.746698 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:43:28.750838 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:43:28.758511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:43:28.763725 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:43:28.773330 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:43:28.773498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:43:28.783326 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:43:28.800000 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:43:28.805698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:43:28.809522 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1203)
Sep 9 23:43:28.813321 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:28.813354 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:28.827358 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:28.827450 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:28.831416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:43:28.902530 systemd-networkd[1150]: eth0: Gained IPv6LL
Sep 9 23:43:29.093650 initrd-setup-root[1227]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:43:29.102777 initrd-setup-root[1234]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:43:29.111931 initrd-setup-root[1241]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:43:29.121016 initrd-setup-root[1248]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:43:29.299060 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:43:29.306303 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:43:29.312614 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:43:29.340053 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:43:29.343773 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:29.371028 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:43:29.390112 ignition[1316]: INFO : Ignition 2.21.0
Sep 9 23:43:29.392147 ignition[1316]: INFO : Stage: mount
Sep 9 23:43:29.392147 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:29.392147 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:29.401150 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:29.401150 ignition[1316]: INFO : PUT result: OK
Sep 9 23:43:29.408699 ignition[1316]: INFO : mount: mount passed
Sep 9 23:43:29.408699 ignition[1316]: INFO : Ignition finished successfully
Sep 9 23:43:29.412866 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:43:29.419143 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:43:29.749392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:43:29.799415 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Sep 9 23:43:29.803924 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:29.804008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:29.811058 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:29.811139 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:29.814529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:43:29.854097 ignition[1344]: INFO : Ignition 2.21.0
Sep 9 23:43:29.854097 ignition[1344]: INFO : Stage: files
Sep 9 23:43:29.858466 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:29.858466 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:29.858466 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:29.865925 ignition[1344]: INFO : PUT result: OK
Sep 9 23:43:29.871287 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:43:29.874688 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:43:29.874688 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:43:29.884433 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:43:29.887683 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:43:29.891158 unknown[1344]: wrote ssh authorized keys file for user: core
Sep 9 23:43:29.893637 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:43:29.899082 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:43:29.903273 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:43:29.965303 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:43:30.263424 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:43:30.263424 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:43:30.263424 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:43:30.263424 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:43:30.280507 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:43:30.596894 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 23:43:30.996444 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:43:30.996444 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 23:43:31.004602 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:43:31.004602 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:43:31.004602 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 23:43:31.004602 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:43:31.004602 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:43:31.022147 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:43:31.022147 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:43:31.022147 ignition[1344]: INFO : files: files passed
Sep 9 23:43:31.022147 ignition[1344]: INFO : Ignition finished successfully
Sep 9 23:43:31.036467 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:43:31.042578 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:43:31.050558 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:43:31.068033 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:43:31.068247 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:43:31.091269 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:31.095577 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:31.095577 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:31.101187 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:43:31.107003 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:43:31.110601 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:43:31.201545 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:43:31.203442 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:43:31.208160 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:43:31.211336 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:43:31.217809 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:43:31.223073 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:43:31.264576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:43:31.269458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:43:31.310422 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:43:31.315701 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:43:31.318909 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:43:31.325123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:43:31.325595 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:43:31.333410 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:43:31.336076 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:43:31.341919 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:43:31.346986 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:43:31.352275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:43:31.357514 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:43:31.362458 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:43:31.365960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:43:31.372956 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:43:31.375501 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:43:31.382260 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:43:31.384546 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:43:31.384790 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:43:31.393042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:43:31.396118 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:43:31.400270 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:43:31.402962 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:43:31.403207 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:43:31.403458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:43:31.413230 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:43:31.413554 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:43:31.415840 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:43:31.416067 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:43:31.422182 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:43:31.445343 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:43:31.452603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:43:31.453570 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:43:31.461854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:43:31.465796 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:43:31.478151 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:43:31.481177 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:43:31.509765 ignition[1398]: INFO : Ignition 2.21.0
Sep 9 23:43:31.509765 ignition[1398]: INFO : Stage: umount
Sep 9 23:43:31.514997 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:31.514997 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:31.514997 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:31.527450 ignition[1398]: INFO : PUT result: OK
Sep 9 23:43:31.521055 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:43:31.540918 ignition[1398]: INFO : umount: umount passed
Sep 9 23:43:31.543609 ignition[1398]: INFO : Ignition finished successfully
Sep 9 23:43:31.544765 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:43:31.545005 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:43:31.555053 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:43:31.555303 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:43:31.559120 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:43:31.559353 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:43:31.564748 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:43:31.564874 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:43:31.567319 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 23:43:31.567482 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 23:43:31.570213 systemd[1]: Stopped target network.target - Network.
Sep 9 23:43:31.574009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:43:31.574146 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:43:31.576906 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:43:31.579026 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:43:31.590133 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:43:31.593256 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:43:31.596531 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:43:31.597244 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:43:31.597336 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:43:31.597950 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:43:31.598035 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:43:31.606479 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:43:31.606646 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:43:31.609274 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:43:31.609430 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:43:31.612990 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:43:31.613113 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:43:31.617769 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:43:31.621502 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:43:31.662021 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:43:31.666758 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:43:31.681560 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:43:31.682071 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:43:31.682604 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:43:31.691639 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:43:31.694133 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 23:43:31.700970 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:43:31.701659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:43:31.713590 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:43:31.722481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:43:31.722637 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:43:31.728897 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:43:31.734003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:31.740670 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:43:31.740788 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:43:31.743779 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:43:31.743895 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:43:31.754245 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:43:31.769688 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:43:31.772683 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:43:31.791182 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 23:43:31.794671 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:43:31.802532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 23:43:31.804968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:43:31.807831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 23:43:31.807923 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:43:31.811356 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 23:43:31.811516 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:43:31.816088 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 23:43:31.816214 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:43:31.828554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:43:31.828721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:43:31.841826 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 23:43:31.847041 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 23:43:31.847211 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:43:31.854596 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 23:43:31.854720 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:43:31.868630 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 23:43:31.868772 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:43:31.880229 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:43:31.880367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:43:31.885303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:43:31.886089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:31.901763 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 23:43:31.901915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 23:43:31.903921 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 23:43:31.904980 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:43:31.906495 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 23:43:31.906759 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 23:43:31.927579 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 23:43:31.929512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 23:43:31.933963 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 23:43:31.941112 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 23:43:31.976897 systemd[1]: Switching root.
Sep 9 23:43:32.047525 systemd-journald[258]: Journal stopped
Sep 9 23:43:34.108359 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Sep 9 23:43:34.117600 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 23:43:34.117654 kernel: SELinux: policy capability open_perms=1
Sep 9 23:43:34.117687 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 23:43:34.117717 kernel: SELinux: policy capability always_check_network=0
Sep 9 23:43:34.117748 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 23:43:34.117777 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 23:43:34.117806 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 23:43:34.117835 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 23:43:34.117874 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 23:43:34.117906 kernel: audit: type=1403 audit(1757461412.404:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 23:43:34.117951 systemd[1]: Successfully loaded SELinux policy in 84.953ms.
Sep 9 23:43:34.118003 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.947ms.
Sep 9 23:43:34.118038 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:43:34.118076 systemd[1]: Detected virtualization amazon.
Sep 9 23:43:34.118106 systemd[1]: Detected architecture arm64.
Sep 9 23:43:34.118135 systemd[1]: Detected first boot.
Sep 9 23:43:34.118166 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:43:34.118200 zram_generator::config[1443]: No configuration found.
Sep 9 23:43:34.118238 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 23:43:34.118266 systemd[1]: Populated /etc with preset unit settings.
Sep 9 23:43:34.118298 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 23:43:34.118328 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 23:43:34.118358 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 23:43:34.122149 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:43:34.122192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 23:43:34.122230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 23:43:34.122258 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 23:43:34.122292 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 23:43:34.122323 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 23:43:34.122355 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 23:43:34.122428 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 23:43:34.122462 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 23:43:34.122495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:43:34.122524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:43:34.122558 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 23:43:34.122588 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 23:43:34.122619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 23:43:34.122651 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:43:34.122680 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 23:43:34.122707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:43:34.122737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:43:34.122766 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 23:43:34.122798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 23:43:34.122826 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:43:34.122853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 23:43:34.122883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:43:34.122912 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:43:34.122940 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:43:34.122970 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:43:34.122998 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 23:43:34.123028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 23:43:34.123060 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 23:43:34.123099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:43:34.123127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:43:34.123179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:43:34.123212 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 23:43:34.123242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 23:43:34.123270 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 23:43:34.123303 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 23:43:34.123333 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 23:43:34.134617 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 23:43:34.134684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 23:43:34.134715 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 23:43:34.134744 systemd[1]: Reached target machines.target - Containers.
Sep 9 23:43:34.134772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 23:43:34.134800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:43:34.134832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:43:34.134861 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 23:43:34.134899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:43:34.134927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:43:34.134964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:43:34.134992 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 23:43:34.135021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:43:34.135567 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 23:43:34.135618 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 23:43:34.135647 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 23:43:34.135675 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 23:43:34.135713 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 23:43:34.135742 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:43:34.135773 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:43:34.135801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:43:34.135829 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:43:34.135857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 23:43:34.135884 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 23:43:34.135915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:43:34.136973 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 23:43:34.137023 systemd[1]: Stopped verity-setup.service.
Sep 9 23:43:34.137060 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 23:43:34.137091 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 23:43:34.137119 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 23:43:34.137148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 23:43:34.137179 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 23:43:34.137208 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 23:43:34.137236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:43:34.137266 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 23:43:34.137297 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 23:43:34.137329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:43:34.137358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:43:34.141538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:43:34.141580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:43:34.141610 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:43:34.141643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 23:43:34.141673 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 23:43:34.141702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 23:43:34.141738 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:43:34.141794 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 23:43:34.141823 kernel: loop: module loaded
Sep 9 23:43:34.141851 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 23:43:34.141879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:43:34.141929 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 23:43:34.141961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:43:34.152804 systemd-journald[1523]: Collecting audit messages is disabled.
Sep 9 23:43:34.152893 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 23:43:34.152929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:43:34.152958 kernel: ACPI: bus type drm_connector registered
Sep 9 23:43:34.152990 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 23:43:34.153052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:43:34.153083 systemd-journald[1523]: Journal started
Sep 9 23:43:34.153134 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec2808150a63caee5abed127469cd98c) is 8M, max 75.3M, 67.3M free.
Sep 9 23:43:33.469708 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 23:43:33.479254 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 9 23:43:33.480065 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 23:43:34.165473 kernel: fuse: init (API version 7.41)
Sep 9 23:43:34.165540 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:43:34.165975 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:43:34.166353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:43:34.172279 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:43:34.175563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 23:43:34.185869 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:43:34.186259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:43:34.190660 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 23:43:34.194210 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 23:43:34.194711 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 23:43:34.227570 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 23:43:34.251801 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:43:34.258769 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 23:43:34.272618 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 23:43:34.276614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:43:34.302500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 23:43:34.310679 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 23:43:34.314281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 23:43:34.325781 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 23:43:34.346196 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec2808150a63caee5abed127469cd98c is 118.398ms for 938 entries.
Sep 9 23:43:34.346196 systemd-journald[1523]: System Journal (/var/log/journal/ec2808150a63caee5abed127469cd98c) is 8M, max 195.6M, 187.6M free.
Sep 9 23:43:34.506201 systemd-journald[1523]: Received client request to flush runtime journal.
Sep 9 23:43:34.506310 kernel: loop0: detected capacity change from 0 to 119320
Sep 9 23:43:34.506358 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 23:43:34.361497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:34.400281 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Sep 9 23:43:34.400305 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Sep 9 23:43:34.419792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:43:34.427952 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 23:43:34.456093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 23:43:34.483566 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 23:43:34.526506 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 23:43:34.536635 kernel: loop1: detected capacity change from 0 to 100608
Sep 9 23:43:34.579904 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 23:43:34.588662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:43:34.615356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:43:34.667445 kernel: loop2: detected capacity change from 0 to 61256
Sep 9 23:43:34.678318 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Sep 9 23:43:34.678361 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Sep 9 23:43:34.694772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:43:34.779504 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 23:43:34.832418 kernel: loop4: detected capacity change from 0 to 119320
Sep 9 23:43:34.865416 kernel: loop5: detected capacity change from 0 to 100608
Sep 9 23:43:34.889423 kernel: loop6: detected capacity change from 0 to 61256
Sep 9 23:43:34.910418 kernel: loop7: detected capacity change from 0 to 207008
Sep 9 23:43:34.943390 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 9 23:43:34.944493 (sd-merge)[1604]: Merged extensions into '/usr'.
Sep 9 23:43:34.963957 systemd[1]: Reload requested from client PID 1557 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 23:43:34.964000 systemd[1]: Reloading...
Sep 9 23:43:35.049980 ldconfig[1550]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 23:43:35.120611 zram_generator::config[1630]: No configuration found.
Sep 9 23:43:35.544812 systemd[1]: Reloading finished in 578 ms.
Sep 9 23:43:35.582452 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 23:43:35.585574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 23:43:35.588964 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 23:43:35.606420 systemd[1]: Starting ensure-sysext.service...
Sep 9 23:43:35.614619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:43:35.625632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:43:35.661090 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)...
Sep 9 23:43:35.661121 systemd[1]: Reloading...
Sep 9 23:43:35.682757 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 23:43:35.682822 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 23:43:35.683500 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 23:43:35.683994 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 23:43:35.687932 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 23:43:35.690917 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Sep 9 23:43:35.691080 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Sep 9 23:43:35.707586 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:43:35.707615 systemd-tmpfiles[1684]: Skipping /boot
Sep 9 23:43:35.731796 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:43:35.731825 systemd-tmpfiles[1684]: Skipping /boot
Sep 9 23:43:35.737531 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Sep 9 23:43:35.867419 zram_generator::config[1718]: No configuration found.
Sep 9 23:43:36.077120 (udev-worker)[1723]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:43:36.471986 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 23:43:36.472463 systemd[1]: Reloading finished in 810 ms.
Sep 9 23:43:36.510836 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:43:36.517500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:43:36.632463 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:43:36.638751 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 23:43:36.643591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:43:36.645576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:43:36.650758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:43:36.661116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:43:36.707805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:43:36.710358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:43:36.710470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:43:36.714726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 23:43:36.723626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:43:36.731763 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:43:36.735341 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:43:36.740712 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 23:43:36.750503 systemd[1]: Finished ensure-sysext.service.
Sep 9 23:43:36.753094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:43:36.753957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:43:36.765998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:43:36.768538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:43:36.775954 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:43:36.776576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:43:36.812699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:43:36.819688 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 23:43:36.837518 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 23:43:36.845756 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:43:36.848487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:43:36.851405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:43:36.906391 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 23:43:36.934732 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 23:43:36.965161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:36.982080 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 23:43:37.012307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 23:43:37.026405 augenrules[1938]: No rules
Sep 9 23:43:37.033965 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:43:37.034973 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:43:37.043144 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 23:43:37.057313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 23:43:37.068742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:43:37.124067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:43:37.173138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:37.183873 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 23:43:37.294815 systemd-networkd[1904]: lo: Link UP
Sep 9 23:43:37.295434 systemd-networkd[1904]: lo: Gained carrier
Sep 9 23:43:37.298251 systemd-networkd[1904]: Enumeration completed
Sep 9 23:43:37.298595 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:43:37.299577 systemd-networkd[1904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:37.299722 systemd-networkd[1904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:43:37.307969 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:43:37.312227 systemd-resolved[1905]: Positive Trust Anchors:
Sep 9 23:43:37.312249 systemd-resolved[1905]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:43:37.312311 systemd-resolved[1905]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:43:37.315954 systemd-networkd[1904]: eth0: Link UP
Sep 9 23:43:37.316235 systemd-networkd[1904]: eth0: Gained carrier
Sep 9 23:43:37.316251 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:43:37.316271 systemd-networkd[1904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:37.333490 systemd-networkd[1904]: eth0: DHCPv4 address 172.31.18.64/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 9 23:43:37.337294 systemd-resolved[1905]: Defaulting to hostname 'linux'.
Sep 9 23:43:37.340419 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:43:37.343210 systemd[1]: Reached target network.target - Network.
Sep 9 23:43:37.346534 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:43:37.349185 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:43:37.351690 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 23:43:37.354482 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 23:43:37.357623 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:43:37.360238 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:43:37.364577 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:43:37.367426 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:43:37.367479 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:43:37.369556 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:43:37.375791 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:43:37.384083 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:43:37.390580 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:43:37.393653 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:43:37.396473 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:43:37.405502 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:43:37.408426 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:43:37.413452 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:43:37.416798 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:43:37.420145 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:43:37.422489 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:43:37.424938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:43:37.425007 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:43:37.427154 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:43:37.431926 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 23:43:37.441875 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:43:37.446828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:43:37.458745 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:43:37.465160 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:43:37.467519 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:43:37.472876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:43:37.484887 systemd[1]: Started ntpd.service - Network Time Service.
Sep 9 23:43:37.494290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:43:37.501446 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 9 23:43:37.523725 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:43:37.530792 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:43:37.548241 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:43:37.552682 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:43:37.557777 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:43:37.566750 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:43:37.574530 jq[1970]: false
Sep 9 23:43:37.575677 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:43:37.594446 extend-filesystems[1971]: Found /dev/nvme0n1p6
Sep 9 23:43:37.596690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:43:37.600060 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:43:37.600537 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:43:37.608235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:43:37.609506 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:43:37.637504 extend-filesystems[1971]: Found /dev/nvme0n1p9
Sep 9 23:43:37.666536 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9
Sep 9 23:43:37.687143 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:43:37.687599 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:43:37.712318 jq[1985]: true
Sep 9 23:43:37.718475 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 21:32:21 UTC 2025 (1): Starting
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 21:32:21 UTC 2025 (1): Starting
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: ----------------------------------------------------
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: corporation. Support and training for ntp-4 are
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: available at https://www.nwtime.org/support
Sep 9 23:43:37.723864 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: ----------------------------------------------------
Sep 9 23:43:37.718555 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 23:43:37.718573 ntpd[1973]: ----------------------------------------------------
Sep 9 23:43:37.718590 ntpd[1973]: ntp-4 is maintained by Network Time Foundation,
Sep 9 23:43:37.718606 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 23:43:37.718622 ntpd[1973]: corporation. Support and training for ntp-4 are
Sep 9 23:43:37.718639 ntpd[1973]: available at https://www.nwtime.org/support
Sep 9 23:43:37.718654 ntpd[1973]: ----------------------------------------------------
Sep 9 23:43:37.729423 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: proto: precision = 0.096 usec (-23)
Sep 9 23:43:37.729120 ntpd[1973]: proto: precision = 0.096 usec (-23)
Sep 9 23:43:37.733741 ntpd[1973]: basedate set to 2025-08-28
Sep 9 23:43:37.737487 tar[1990]: linux-arm64/LICENSE
Sep 9 23:43:37.737487 tar[1990]: linux-arm64/helm
Sep 9 23:43:37.737889 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: basedate set to 2025-08-28
Sep 9 23:43:37.737889 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: gps base set to 2025-08-31 (week 2382)
Sep 9 23:43:37.733794 ntpd[1973]: gps base set to 2025-08-31 (week 2382)
Sep 9 23:43:37.745907 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 23:43:37.748884 (ntainerd)[2011]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:43:37.750763 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 23:43:37.750763 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 23:43:37.750763 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 23:43:37.750763 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listen normally on 3 eth0 172.31.18.64:123
Sep 9 23:43:37.750763 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listen normally on 4 lo [::1]:123
Sep 9 23:43:37.745987 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 23:43:37.751054 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: bind(21) AF_INET6 fe80::424:7bff:fe3b:774d%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:43:37.751054 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: unable to create socket on eth0 (5) for fe80::424:7bff:fe3b:774d%2#123
Sep 9 23:43:37.751054 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: failed to init interface for address fe80::424:7bff:fe3b:774d%2
Sep 9 23:43:37.751054 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: Listening on routing socket on fd #21 for interface updates
Sep 9 23:43:37.746230 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 23:43:37.746286 ntpd[1973]: Listen normally on 3 eth0 172.31.18.64:123
Sep 9 23:43:37.746349 ntpd[1973]: Listen normally on 4 lo [::1]:123
Sep 9 23:43:37.750778 ntpd[1973]: bind(21) AF_INET6 fe80::424:7bff:fe3b:774d%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:43:37.750826 ntpd[1973]: unable to create socket on eth0 (5) for fe80::424:7bff:fe3b:774d%2#123
Sep 9 23:43:37.750851 ntpd[1973]: failed to init interface for address fe80::424:7bff:fe3b:774d%2
Sep 9 23:43:37.750932 ntpd[1973]: Listening on routing socket on fd #21 for interface updates
Sep 9 23:43:37.775529 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9
Sep 9 23:43:37.802140 extend-filesystems[2023]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:43:37.809331 dbus-daemon[1968]: [system] SELinux support is enabled
Sep 9 23:43:37.817546 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:43:37.817546 ntpd[1973]: 9 Sep 23:43:37 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:43:37.810628 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:43:37.812709 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:43:37.812756 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:43:37.835664 jq[2016]: true
Sep 9 23:43:37.825218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:43:37.825334 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:43:37.828913 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:43:37.828950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:43:37.846720 dbus-daemon[1968]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1904 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 23:43:37.856640 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 9 23:43:37.867995 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 23:43:37.889959 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 23:43:37.898078 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 9 23:43:37.911728 update_engine[1984]: I20250909 23:43:37.909919 1984 main.cc:92] Flatcar Update Engine starting
Sep 9 23:43:37.924802 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:43:37.929912 update_engine[1984]: I20250909 23:43:37.929836 1984 update_check_scheduler.cc:74] Next update check in 11m34s
Sep 9 23:43:37.931217 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:43:37.996214 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 9 23:43:38.010944 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 9 23:43:38.010944 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:43:38.010944 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 9 23:43:38.043522 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9
Sep 9 23:43:38.014854 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:43:38.047536 coreos-metadata[1967]: Sep 09 23:43:38.046 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 9 23:43:38.017585 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:43:38.060537 coreos-metadata[1967]: Sep 09 23:43:38.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 9 23:43:38.065663 coreos-metadata[1967]: Sep 09 23:43:38.063 INFO Fetch successful
Sep 9 23:43:38.065663 coreos-metadata[1967]: Sep 09 23:43:38.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 9 23:43:38.065476 systemd-logind[1981]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:43:38.065513 systemd-logind[1981]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 9 23:43:38.067883 systemd-logind[1981]: New seat seat0.
Sep 9 23:43:38.074559 coreos-metadata[1967]: Sep 09 23:43:38.068 INFO Fetch successful
Sep 9 23:43:38.074559 coreos-metadata[1967]: Sep 09 23:43:38.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 9 23:43:38.076954 coreos-metadata[1967]: Sep 09 23:43:38.076 INFO Fetch successful
Sep 9 23:43:38.076954 coreos-metadata[1967]: Sep 09 23:43:38.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 9 23:43:38.078313 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:43:38.082227 coreos-metadata[1967]: Sep 09 23:43:38.081 INFO Fetch successful
Sep 9 23:43:38.082227 coreos-metadata[1967]: Sep 09 23:43:38.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 9 23:43:38.085484 coreos-metadata[1967]: Sep 09 23:43:38.085 INFO Fetch failed with 404: resource not found
Sep 9 23:43:38.085716 coreos-metadata[1967]: Sep 09 23:43:38.085 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 9 23:43:38.089709 coreos-metadata[1967]: Sep 09 23:43:38.088 INFO Fetch successful
Sep 9 23:43:38.089709 coreos-metadata[1967]: Sep 09 23:43:38.089 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 9 23:43:38.097008 coreos-metadata[1967]: Sep 09 23:43:38.095 INFO Fetch successful
Sep 9 23:43:38.097008 coreos-metadata[1967]: Sep 09 23:43:38.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 9 23:43:38.098889 bash[2052]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:43:38.103024 coreos-metadata[1967]: Sep 09 23:43:38.102 INFO Fetch successful
Sep 9 23:43:38.103024 coreos-metadata[1967]: Sep 09 23:43:38.102 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 9 23:43:38.107979 coreos-metadata[1967]: Sep 09 23:43:38.104 INFO Fetch successful
Sep 9 23:43:38.107979 coreos-metadata[1967]: Sep 09 23:43:38.104 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 9 23:43:38.105174 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:43:38.112410 coreos-metadata[1967]: Sep 09 23:43:38.110 INFO Fetch successful
Sep 9 23:43:38.116786 systemd[1]: Starting sshkeys.service...
Sep 9 23:43:38.224049 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 23:43:38.236655 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 23:43:38.288457 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 23:43:38.292777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:43:38.437010 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 23:43:38.439994 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 23:43:38.525625 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 23:43:38.548396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:43:38.553564 coreos-metadata[2088]: Sep 09 23:43:38.552 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 9 23:43:38.563011 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 23:43:38.569279 coreos-metadata[2088]: Sep 09 23:43:38.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 9 23:43:38.570872 coreos-metadata[2088]: Sep 09 23:43:38.570 INFO Fetch successful
Sep 9 23:43:38.570872 coreos-metadata[2088]: Sep 09 23:43:38.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 9 23:43:38.572406 coreos-metadata[2088]: Sep 09 23:43:38.571 INFO Fetch successful
Sep 9 23:43:38.580178 unknown[2088]: wrote ssh authorized keys file for user: core
Sep 9 23:43:38.595175 containerd[2011]: time="2025-09-09T23:43:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:43:38.599473 containerd[2011]: time="2025-09-09T23:43:38.598932982Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 23:43:38.657294 update-ssh-keys[2146]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.669519023Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.248µs"
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.669588011Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.669639779Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.669920291Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.669957047Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.670017815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.670139543Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:43:38.670411 containerd[2011]: time="2025-09-09T23:43:38.670174943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.678619955Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.678681599Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.678718379Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.678759251Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.678961487Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.679405475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.679491203Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.679526327Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:43:38.680781 containerd[2011]: time="2025-09-09T23:43:38.679579319Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:43:38.670446 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 23:43:38.684867 systemd[1]: Finished sshkeys.service.
Sep 9 23:43:38.692824 containerd[2011]: time="2025-09-09T23:43:38.691954271Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:43:38.692824 containerd[2011]: time="2025-09-09T23:43:38.692153051Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703363007Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703497911Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703528883Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703558595Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703587395Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703618883Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703647515Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703675811Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703715471Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703748051Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703773803Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.703804823Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.704030831Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:43:38.704656 containerd[2011]: time="2025-09-09T23:43:38.704067167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704106311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704137535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704165579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704198111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704224835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704263631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704296307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704330615Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:43:38.705266 containerd[2011]: time="2025-09-09T23:43:38.704358995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:43:38.709951 containerd[2011]: time="2025-09-09T23:43:38.706505123Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:43:38.713948 containerd[2011]: time="2025-09-09T23:43:38.710763527Z" level=info msg="Start snapshots syncer"
Sep 9 23:43:38.713948 containerd[2011]: time="2025-09-09T23:43:38.710920319Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:43:38.713948 containerd[2011]: time="2025-09-09T23:43:38.711365735Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:43:38.714287 containerd[2011]: time="2025-09-09T23:43:38.713756207Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:43:38.715639 containerd[2011]: time="2025-09-09T23:43:38.714471311Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:43:38.717101 containerd[2011]: time="2025-09-09T23:43:38.717049115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:43:38.717292 containerd[2011]: time="2025-09-09T23:43:38.717262631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:43:38.717445 containerd[2011]: time="2025-09-09T23:43:38.717417707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:43:38.718395 containerd[2011]: time="2025-09-09T23:43:38.717579143Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:43:38.718503 containerd[2011]: time="2025-09-09T23:43:38.717617975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:43:38.718628 containerd[2011]: time="2025-09-09T23:43:38.718601387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:43:38.718750 containerd[2011]: time="2025-09-09T23:43:38.718723835Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:43:38.718913 containerd[2011]: time="2025-09-09T23:43:38.718873859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:43:38.719881 containerd[2011]: time="2025-09-09T23:43:38.719503823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:43:38.720409 containerd[2011]: time="2025-09-09T23:43:38.720070895Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:43:38.721978 ntpd[1973]: bind(24) AF_INET6 fe80::424:7bff:fe3b:774d%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:43:38.724821 ntpd[1973]: 9 Sep 23:43:38 ntpd[1973]: bind(24) AF_INET6 fe80::424:7bff:fe3b:774d%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 23:43:38.724821 ntpd[1973]: 9 Sep 23:43:38 ntpd[1973]: unable to create socket on eth0 (6) for fe80::424:7bff:fe3b:774d%2#123 Sep 9 23:43:38.724821 ntpd[1973]: 9 Sep 23:43:38 ntpd[1973]: failed to init interface for address fe80::424:7bff:fe3b:774d%2 Sep 9 23:43:38.722041 ntpd[1973]: unable to create socket on eth0 (6) for fe80::424:7bff:fe3b:774d%2#123 Sep 9 23:43:38.722067 ntpd[1973]: failed to init interface for address fe80::424:7bff:fe3b:774d%2 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.722033099Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725404763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725437751Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725486579Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725512919Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725540231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725592767Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 
23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725784275Z" level=info msg="runtime interface created" Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725802179Z" level=info msg="created NRI interface" Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725833091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.725888183Z" level=info msg="Connect containerd service" Sep 9 23:43:38.726283 containerd[2011]: time="2025-09-09T23:43:38.726144131Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:43:38.735161 containerd[2011]: time="2025-09-09T23:43:38.734310923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:43:39.006323 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:43:39.038934 containerd[2011]: time="2025-09-09T23:43:39.036469101Z" level=info msg="Start subscribing containerd event" Sep 9 23:43:39.038934 containerd[2011]: time="2025-09-09T23:43:39.038739333Z" level=info msg="Start recovering state" Sep 9 23:43:39.039593 containerd[2011]: time="2025-09-09T23:43:39.038559381Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:43:39.042400 containerd[2011]: time="2025-09-09T23:43:39.041459433Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043501977Z" level=info msg="Start event monitor" Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043547049Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043574349Z" level=info msg="Start streaming server" Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043594365Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043611117Z" level=info msg="runtime interface starting up..." Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043625157Z" level=info msg="starting plugins..." Sep 9 23:43:39.043856 containerd[2011]: time="2025-09-09T23:43:39.043654833Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 23:43:39.051216 containerd[2011]: time="2025-09-09T23:43:39.044595537Z" level=info msg="containerd successfully booted in 0.454683s" Sep 9 23:43:39.045537 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 23:43:39.080471 systemd-networkd[1904]: eth0: Gained IPv6LL Sep 9 23:43:39.089213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:43:39.093092 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:43:39.100083 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 9 23:43:39.109870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:39.118999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
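The containerd entries above use a logfmt-style `key="value"` layout (`time=…`, `level=…`, `msg=…`). A minimal sketch of pulling fields out of such a line, assuming that layout holds; `parse_logfmt` is a hypothetical helper for illustration, not part of containerd:

```python
import re

# Hypothetical helper: split a logfmt-style line (key="quoted value"
# or key=bareword) into a dict. Good enough for the lines above.
def parse_logfmt(line: str) -> dict:
    pairs = re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', line)
    out = {}
    for key, val in pairs:
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1]  # drop surrounding quotes
        out[key] = val
    return out

# A line quoted from the log above:
line = ('time="2025-09-09T23:43:39.044595537Z" level=info '
        'msg="containerd successfully booted in 0.454683s"')
rec = parse_logfmt(line)
m = re.search(r'booted in ([\d.]+)s', rec["msg"])
boot_seconds = float(m.group(1))  # 0.454683
```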
Sep 9 23:43:39.144040 polkitd[2141]: Started polkitd version 126 Sep 9 23:43:39.179713 polkitd[2141]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 23:43:39.180304 polkitd[2141]: Loading rules from directory /run/polkit-1/rules.d Sep 9 23:43:39.180401 polkitd[2141]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 23:43:39.182403 polkitd[2141]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 23:43:39.182566 polkitd[2141]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 23:43:39.182749 polkitd[2141]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 23:43:39.191551 polkitd[2141]: Finished loading, compiling and executing 2 rules Sep 9 23:43:39.193692 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 23:43:39.200829 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 23:43:39.202954 polkitd[2141]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 23:43:39.256611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:43:39.283424 systemd-hostnamed[2030]: Hostname set to (transient) Sep 9 23:43:39.285107 systemd-resolved[1905]: System hostname changed to 'ip-172-31-18-64'. Sep 9 23:43:39.349559 amazon-ssm-agent[2183]: Initializing new seelog logger Sep 9 23:43:39.349986 amazon-ssm-agent[2183]: New Seelog Logger Creation Complete Sep 9 23:43:39.349986 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.349986 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 processing appconfig overrides Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 processing appconfig overrides Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.354096 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 processing appconfig overrides Sep 9 23:43:39.356618 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3530 INFO Proxy environment variables: Sep 9 23:43:39.361593 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.361593 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.361766 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 processing appconfig overrides Sep 9 23:43:39.459587 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3531 INFO https_proxy: Sep 9 23:43:39.559523 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3531 INFO http_proxy: Sep 9 23:43:39.660747 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3531 INFO no_proxy: Sep 9 23:43:39.734716 tar[1990]: linux-arm64/README.md Sep 9 23:43:39.758506 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3533 INFO Checking if agent identity type OnPrem can be assumed Sep 9 23:43:39.775179 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 23:43:39.791641 sshd_keygen[2008]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:43:39.842431 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 9 23:43:39.850787 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:43:39.854866 systemd[1]: Started sshd@0-172.31.18.64:22-139.178.89.65:35050.service - OpenSSH per-connection server daemon (139.178.89.65:35050). Sep 9 23:43:39.859688 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.3534 INFO Checking if agent identity type EC2 can be assumed Sep 9 23:43:39.875131 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.875131 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:39.875131 amazon-ssm-agent[2183]: 2025/09/09 23:43:39 processing appconfig overrides Sep 9 23:43:39.907210 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:43:39.910465 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:43:39.918916 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5419 INFO Agent will take identity from EC2 Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5485 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5486 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5486 INFO [amazon-ssm-agent] Starting Core Agent Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5486 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5486 INFO [Registrar] Starting registrar module Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5519 INFO [EC2Identity] Checking disk for registration info Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5520 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.5520 INFO [EC2Identity] Generating registration keypair Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8235 INFO [EC2Identity] Checking write access before registering Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8242 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8740 INFO [EC2Identity] EC2 registration was successful. Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8741 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8742 INFO [CredentialRefresher] credentialRefresher has started Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.8742 INFO [CredentialRefresher] Starting credentials refresher loop Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.9234 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 9 23:43:39.924262 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.9237 INFO [CredentialRefresher] Credentials ready Sep 9 23:43:39.958709 amazon-ssm-agent[2183]: 2025-09-09 23:43:39.9240 INFO [CredentialRefresher] Next credential rotation will be in 29.9999897754 minutes Sep 9 23:43:39.965996 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:43:39.974960 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 9 23:43:39.981918 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 23:43:39.990965 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 23:43:40.112440 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 35050 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:40.115903 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:40.129102 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:43:40.134543 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:43:40.168814 systemd-logind[1981]: New session 1 of user core. Sep 9 23:43:40.183438 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:43:40.193545 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:43:40.219849 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:43:40.225070 systemd-logind[1981]: New session c1 of user core. Sep 9 23:43:40.523169 systemd[2235]: Queued start job for default target default.target. Sep 9 23:43:40.529997 systemd[2235]: Created slice app.slice - User Application Slice. Sep 9 23:43:40.530061 systemd[2235]: Reached target paths.target - Paths. Sep 9 23:43:40.530145 systemd[2235]: Reached target timers.target - Timers. Sep 9 23:43:40.532545 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:43:40.563548 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:43:40.563814 systemd[2235]: Reached target sockets.target - Sockets. Sep 9 23:43:40.563923 systemd[2235]: Reached target basic.target - Basic System. Sep 9 23:43:40.564021 systemd[2235]: Reached target default.target - Main User Target. Sep 9 23:43:40.564086 systemd[2235]: Startup finished in 322ms. 
Sep 9 23:43:40.564719 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:43:40.575666 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:43:40.741565 systemd[1]: Started sshd@1-172.31.18.64:22-139.178.89.65:36852.service - OpenSSH per-connection server daemon (139.178.89.65:36852). Sep 9 23:43:40.904901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:40.908521 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:43:40.915534 systemd[1]: Startup finished in 3.648s (kernel) + 8.694s (initrd) + 8.596s (userspace) = 20.940s. Sep 9 23:43:40.922084 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:40.955476 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 36852 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:40.959804 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:40.976466 systemd-logind[1981]: New session 2 of user core. Sep 9 23:43:40.983966 amazon-ssm-agent[2183]: 2025-09-09 23:43:40.9822 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 9 23:43:40.982714 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 23:43:41.086410 amazon-ssm-agent[2183]: 2025-09-09 23:43:40.9953 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Sep 9 23:43:41.118367 sshd[2262]: Connection closed by 139.178.89.65 port 36852 Sep 9 23:43:41.118873 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:41.130270 systemd[1]: sshd@1-172.31.18.64:22-139.178.89.65:36852.service: Deactivated successfully. Sep 9 23:43:41.134276 systemd[1]: session-2.scope: Deactivated successfully. 
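systemd's "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. A small sketch checking the arithmetic; note the printed components appear to be rounded independently, so they can differ from the printed total by a few milliseconds (here 3.648 + 8.694 + 8.596 = 20.938 versus the printed 20.940):

```python
import re

# The line quoted from the log above:
line = ("Startup finished in 3.648s (kernel) + 8.694s (initrd) "
        "+ 8.596s (userspace) = 20.940s.")

# Each phase duration precedes a parenthesized label.
parts = [float(x) for x in re.findall(r'([\d.]+)s \(', line)]
total = float(re.search(r'= ([\d.]+)s', line).group(1))

# Independently rounded parts need not sum exactly to the total:
assert abs(sum(parts) - total) < 0.005
```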
Sep 9 23:43:41.138534 systemd-logind[1981]: Session 2 logged out. Waiting for processes to exit. Sep 9 23:43:41.158989 systemd[1]: Started sshd@2-172.31.18.64:22-139.178.89.65:36856.service - OpenSSH per-connection server daemon (139.178.89.65:36856). Sep 9 23:43:41.162941 systemd-logind[1981]: Removed session 2. Sep 9 23:43:41.186939 amazon-ssm-agent[2183]: 2025-09-09 23:43:40.9953 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 9 23:43:41.381335 sshd[2278]: Accepted publickey for core from 139.178.89.65 port 36856 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:41.386304 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:41.397817 systemd-logind[1981]: New session 3 of user core. Sep 9 23:43:41.406646 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:43:41.525045 sshd[2287]: Connection closed by 139.178.89.65 port 36856 Sep 9 23:43:41.527677 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:41.535259 systemd[1]: sshd@2-172.31.18.64:22-139.178.89.65:36856.service: Deactivated successfully. Sep 9 23:43:41.540062 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:43:41.543534 systemd-logind[1981]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:43:41.560657 systemd-logind[1981]: Removed session 3. Sep 9 23:43:41.562611 systemd[1]: Started sshd@3-172.31.18.64:22-139.178.89.65:36872.service - OpenSSH per-connection server daemon (139.178.89.65:36872). 
Sep 9 23:43:41.719543 ntpd[1973]: Listen normally on 7 eth0 [fe80::424:7bff:fe3b:774d%2]:123 Sep 9 23:43:41.720998 ntpd[1973]: 9 Sep 23:43:41 ntpd[1973]: Listen normally on 7 eth0 [fe80::424:7bff:fe3b:774d%2]:123 Sep 9 23:43:41.824929 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 36872 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:41.827849 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:41.836734 systemd-logind[1981]: New session 4 of user core. Sep 9 23:43:41.845643 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:43:41.979530 sshd[2296]: Connection closed by 139.178.89.65 port 36872 Sep 9 23:43:41.980060 kubelet[2254]: E0909 23:43:41.979144 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:41.978970 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:41.986516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:41.987235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:41.988183 systemd[1]: kubelet.service: Consumed 1.385s CPU time, 257.7M memory peak. Sep 9 23:43:41.989157 systemd[1]: sshd@3-172.31.18.64:22-139.178.89.65:36872.service: Deactivated successfully. Sep 9 23:43:41.992662 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:43:41.995497 systemd-logind[1981]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:43:41.998864 systemd-logind[1981]: Removed session 4. Sep 9 23:43:42.015862 systemd[1]: Started sshd@4-172.31.18.64:22-139.178.89.65:36884.service - OpenSSH per-connection server daemon (139.178.89.65:36884). 
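The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") is the unit starting before anything has generated its configuration; on a kubeadm-managed node that file is written during `kubeadm init`/`kubeadm join`, so the unit keeps failing and restarting until the node is joined. An illustrative minimal sketch of the file the kubelet expects; field values here are examples only, not what kubeadm would generate:

```yaml
# Illustrative minimal /var/lib/kubelet/config.yaml; on a kubeadm
# cluster this file is generated by kubeadm, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```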
Sep 9 23:43:42.210616 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 36884 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:42.213125 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:42.220923 systemd-logind[1981]: New session 5 of user core. Sep 9 23:43:42.232614 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 23:43:42.352775 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:43:42.353356 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:42.372262 sudo[2307]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:42.396425 sshd[2306]: Connection closed by 139.178.89.65 port 36884 Sep 9 23:43:42.396441 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:42.403231 systemd-logind[1981]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:43:42.404888 systemd[1]: sshd@4-172.31.18.64:22-139.178.89.65:36884.service: Deactivated successfully. Sep 9 23:43:42.408922 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:43:42.411906 systemd-logind[1981]: Removed session 5. Sep 9 23:43:42.437919 systemd[1]: Started sshd@5-172.31.18.64:22-139.178.89.65:36896.service - OpenSSH per-connection server daemon (139.178.89.65:36896). Sep 9 23:43:42.643041 sshd[2313]: Accepted publickey for core from 139.178.89.65 port 36896 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:42.645445 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:42.653467 systemd-logind[1981]: New session 6 of user core. Sep 9 23:43:42.660644 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 23:43:42.767281 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:43:42.767918 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:42.775809 sudo[2318]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:42.785424 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:43:42.786546 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:42.803987 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:43:42.881052 augenrules[2340]: No rules Sep 9 23:43:42.883302 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:43:42.885446 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:43:42.887885 sudo[2317]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:42.911910 sshd[2316]: Connection closed by 139.178.89.65 port 36896 Sep 9 23:43:42.912833 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:42.920155 systemd[1]: sshd@5-172.31.18.64:22-139.178.89.65:36896.service: Deactivated successfully. Sep 9 23:43:42.923116 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:43:42.924659 systemd-logind[1981]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:43:42.927554 systemd-logind[1981]: Removed session 6. Sep 9 23:43:42.948770 systemd[1]: Started sshd@6-172.31.18.64:22-139.178.89.65:36898.service - OpenSSH per-connection server daemon (139.178.89.65:36898). 
Sep 9 23:43:43.146393 sshd[2349]: Accepted publickey for core from 139.178.89.65 port 36898 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:43.148687 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:43.157462 systemd-logind[1981]: New session 7 of user core. Sep 9 23:43:43.168641 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 23:43:43.270197 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:43:43.271399 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:43.794053 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:43:43.809134 (dockerd)[2371]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:43:44.185620 dockerd[2371]: time="2025-09-09T23:43:44.185140454Z" level=info msg="Starting up" Sep 9 23:43:44.187758 dockerd[2371]: time="2025-09-09T23:43:44.187682798Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:43:44.208838 dockerd[2371]: time="2025-09-09T23:43:44.208774334Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:43:44.233358 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport192863151-merged.mount: Deactivated successfully. Sep 9 23:43:44.375644 dockerd[2371]: time="2025-09-09T23:43:44.375338667Z" level=info msg="Loading containers: start." Sep 9 23:43:44.390426 kernel: Initializing XFRM netlink socket Sep 9 23:43:44.722209 (udev-worker)[2394]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:43:44.791091 systemd-networkd[1904]: docker0: Link UP Sep 9 23:43:44.795819 dockerd[2371]: time="2025-09-09T23:43:44.795754061Z" level=info msg="Loading containers: done." 
Sep 9 23:43:44.823503 dockerd[2371]: time="2025-09-09T23:43:44.823327937Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:43:44.823707 dockerd[2371]: time="2025-09-09T23:43:44.823553525Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 23:43:44.823762 dockerd[2371]: time="2025-09-09T23:43:44.823705373Z" level=info msg="Initializing buildkit" Sep 9 23:43:44.859992 dockerd[2371]: time="2025-09-09T23:43:44.859920750Z" level=info msg="Completed buildkit initialization" Sep 9 23:43:44.876818 dockerd[2371]: time="2025-09-09T23:43:44.876736746Z" level=info msg="Daemon has completed initialization" Sep 9 23:43:44.877779 dockerd[2371]: time="2025-09-09T23:43:44.876834546Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:43:44.877224 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:43:45.230008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2975914778-merged.mount: Deactivated successfully. Sep 9 23:43:45.997986 containerd[2011]: time="2025-09-09T23:43:45.997927659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 23:43:46.582526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855194647.mount: Deactivated successfully. 
Sep 9 23:43:47.995954 containerd[2011]: time="2025-09-09T23:43:47.995840214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:47.998158 containerd[2011]: time="2025-09-09T23:43:47.997814145Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 9 23:43:48.000233 containerd[2011]: time="2025-09-09T23:43:48.000155219Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:48.006262 containerd[2011]: time="2025-09-09T23:43:48.005364130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:48.007323 containerd[2011]: time="2025-09-09T23:43:48.007267261Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.009172274s" Sep 9 23:43:48.007440 containerd[2011]: time="2025-09-09T23:43:48.007326379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 9 23:43:48.008932 containerd[2011]: time="2025-09-09T23:43:48.008888011Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 23:43:49.430083 containerd[2011]: time="2025-09-09T23:43:49.428468841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:49.430698 containerd[2011]: time="2025-09-09T23:43:49.430657931Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 9 23:43:49.431857 containerd[2011]: time="2025-09-09T23:43:49.431818815Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:49.436757 containerd[2011]: time="2025-09-09T23:43:49.436706121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:49.438599 containerd[2011]: time="2025-09-09T23:43:49.438534443Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.429428295s" Sep 9 23:43:49.438599 containerd[2011]: time="2025-09-09T23:43:49.438593885Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 9 23:43:49.439312 containerd[2011]: time="2025-09-09T23:43:49.439145116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 23:43:50.649299 containerd[2011]: time="2025-09-09T23:43:50.649232710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:50.650967 containerd[2011]: time="2025-09-09T23:43:50.650912181Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 9 23:43:50.652425 containerd[2011]: time="2025-09-09T23:43:50.651651091Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:50.656241 containerd[2011]: time="2025-09-09T23:43:50.656163546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:50.658617 containerd[2011]: time="2025-09-09T23:43:50.658171417Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.218967856s" Sep 9 23:43:50.658617 containerd[2011]: time="2025-09-09T23:43:50.658225444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 9 23:43:50.658911 containerd[2011]: time="2025-09-09T23:43:50.658858353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 23:43:51.956751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529067392.mount: Deactivated successfully. Sep 9 23:43:52.237989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:43:52.242665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:52.600630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:43:52.612936 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:43:52.693419 kubelet[2664]: E0909 23:43:52.693318 2664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:43:52.707286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:43:52.707651 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:43:52.709065 systemd[1]: kubelet.service: Consumed 320ms CPU time, 107.1M memory peak.
Sep 9 23:43:52.761971 containerd[2011]: time="2025-09-09T23:43:52.761888793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:52.763861 containerd[2011]: time="2025-09-09T23:43:52.763787842Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724"
Sep 9 23:43:52.766312 containerd[2011]: time="2025-09-09T23:43:52.766238639Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:52.770666 containerd[2011]: time="2025-09-09T23:43:52.770585507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:52.772006 containerd[2011]: time="2025-09-09T23:43:52.771796372Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 2.112875565s"
Sep 9 23:43:52.772006 containerd[2011]: time="2025-09-09T23:43:52.771862417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 23:43:52.772949 containerd[2011]: time="2025-09-09T23:43:52.772852264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:43:53.361829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348384304.mount: Deactivated successfully.
Sep 9 23:43:54.504538 containerd[2011]: time="2025-09-09T23:43:54.504483431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:54.506679 containerd[2011]: time="2025-09-09T23:43:54.506616837Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 9 23:43:54.507024 containerd[2011]: time="2025-09-09T23:43:54.506988543Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:54.511959 containerd[2011]: time="2025-09-09T23:43:54.511908266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:54.514315 containerd[2011]: time="2025-09-09T23:43:54.513983659Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.741083322s"
Sep 9 23:43:54.514315 containerd[2011]: time="2025-09-09T23:43:54.514042860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:43:54.514853 containerd[2011]: time="2025-09-09T23:43:54.514819192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:43:54.983085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578090842.mount: Deactivated successfully.
Sep 9 23:43:54.992212 containerd[2011]: time="2025-09-09T23:43:54.992151114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:43:54.993591 containerd[2011]: time="2025-09-09T23:43:54.993538768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 9 23:43:54.996075 containerd[2011]: time="2025-09-09T23:43:54.995973068Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:43:55.000489 containerd[2011]: time="2025-09-09T23:43:55.000407533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:43:55.002277 containerd[2011]: time="2025-09-09T23:43:55.002199692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 487.225574ms"
Sep 9 23:43:55.002508 containerd[2011]: time="2025-09-09T23:43:55.002249037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 23:43:55.003350 containerd[2011]: time="2025-09-09T23:43:55.003299670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 23:43:55.558695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910065265.mount: Deactivated successfully.
Sep 9 23:43:57.715833 containerd[2011]: time="2025-09-09T23:43:57.715773932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:57.718354 containerd[2011]: time="2025-09-09T23:43:57.718307354Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Sep 9 23:43:57.719899 containerd[2011]: time="2025-09-09T23:43:57.719852658Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:57.726331 containerd[2011]: time="2025-09-09T23:43:57.726279482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:57.728365 containerd[2011]: time="2025-09-09T23:43:57.728303262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.724777759s"
Sep 9 23:43:57.728473 containerd[2011]: time="2025-09-09T23:43:57.728361803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 23:44:02.958497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 23:44:02.963705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:03.291646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:03.305227 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:44:03.386703 kubelet[2810]: E0909 23:44:03.386641 2810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:44:03.391880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:44:03.392717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:44:03.393561 systemd[1]: kubelet.service: Consumed 291ms CPU time, 107.2M memory peak.
Sep 9 23:44:05.448916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:05.449956 systemd[1]: kubelet.service: Consumed 291ms CPU time, 107.2M memory peak.
Sep 9 23:44:05.453840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:05.512932 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-7.scope)...
Sep 9 23:44:05.512962 systemd[1]: Reloading...
Sep 9 23:44:05.772448 zram_generator::config[2872]: No configuration found.
Sep 9 23:44:06.230749 systemd[1]: Reloading finished in 717 ms.
Sep 9 23:44:06.341917 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 23:44:06.342085 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 23:44:06.342658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:06.342730 systemd[1]: kubelet.service: Consumed 218ms CPU time, 94.9M memory peak.
Sep 9 23:44:06.345695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:06.674285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:06.688939 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:44:06.764910 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:44:06.764910 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:44:06.764910 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:44:06.765487 kubelet[2932]: I0909 23:44:06.765009 2932 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:44:07.674267 kubelet[2932]: I0909 23:44:07.674191 2932 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:44:07.674267 kubelet[2932]: I0909 23:44:07.674244 2932 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:44:07.674822 kubelet[2932]: I0909 23:44:07.674773 2932 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:44:07.731364 kubelet[2932]: E0909 23:44:07.731264 2932 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:07.734596 kubelet[2932]: I0909 23:44:07.734533 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:44:07.749155 kubelet[2932]: I0909 23:44:07.748550 2932 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:44:07.754649 kubelet[2932]: I0909 23:44:07.754616 2932 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:44:07.755313 kubelet[2932]: I0909 23:44:07.755271 2932 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:44:07.755700 kubelet[2932]: I0909 23:44:07.755431 2932 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:44:07.756056 kubelet[2932]: I0909 23:44:07.756036 2932 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:44:07.756152 kubelet[2932]: I0909 23:44:07.756135 2932 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:44:07.756588 kubelet[2932]: I0909 23:44:07.756569 2932 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:44:07.763876 kubelet[2932]: I0909 23:44:07.763841 2932 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:44:07.764206 kubelet[2932]: I0909 23:44:07.764033 2932 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:44:07.764206 kubelet[2932]: I0909 23:44:07.764081 2932 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:44:07.764206 kubelet[2932]: I0909 23:44:07.764112 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:44:07.767445 kubelet[2932]: W0909 23:44:07.766958 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-64&limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused
Sep 9 23:44:07.767445 kubelet[2932]: E0909 23:44:07.767070 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-64&limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:07.769819 kubelet[2932]: W0909 23:44:07.769691 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused
Sep 9 23:44:07.769819 kubelet[2932]: E0909 23:44:07.769761 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:07.770174 kubelet[2932]: I0909 23:44:07.770150 2932 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:44:07.771311 kubelet[2932]: I0909 23:44:07.771273 2932 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:44:07.771693 kubelet[2932]: W0909 23:44:07.771673 2932 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:44:07.775327 kubelet[2932]: I0909 23:44:07.775281 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:44:07.775923 kubelet[2932]: I0909 23:44:07.775444 2932 server.go:1287] "Started kubelet"
Sep 9 23:44:07.784934 kubelet[2932]: E0909 23:44:07.783395 2932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.64:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-64.1863c1e01518df58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-64,UID:ip-172-31-18-64,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-64,},FirstTimestamp:2025-09-09 23:44:07.775412056 +0000 UTC m=+1.079078362,LastTimestamp:2025-09-09 23:44:07.775412056 +0000 UTC m=+1.079078362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-64,}"
Sep 9 23:44:07.784934 kubelet[2932]: I0909 23:44:07.784787 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:44:07.792736 kubelet[2932]: I0909 23:44:07.792705 2932 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 23:44:07.792959 kubelet[2932]: I0909 23:44:07.792897 2932 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:44:07.793929 kubelet[2932]: E0909 23:44:07.793893 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-64\" not found"
Sep 9 23:44:07.795101 kubelet[2932]: I0909 23:44:07.795045 2932 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 23:44:07.795226 kubelet[2932]: I0909 23:44:07.795151 2932 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:44:07.795772 kubelet[2932]: E0909 23:44:07.795731 2932 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 23:44:07.796490 kubelet[2932]: I0909 23:44:07.796411 2932 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:44:07.798119 kubelet[2932]: I0909 23:44:07.798038 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:44:07.798685 kubelet[2932]: I0909 23:44:07.798645 2932 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:44:07.799430 kubelet[2932]: I0909 23:44:07.798989 2932 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:44:07.800180 kubelet[2932]: E0909 23:44:07.800016 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-64?timeout=10s\": dial tcp 172.31.18.64:6443: connect: connection refused" interval="200ms"
Sep 9 23:44:07.801066 kubelet[2932]: W0909 23:44:07.800984 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused
Sep 9 23:44:07.801213 kubelet[2932]: E0909 23:44:07.801078 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:07.801321 kubelet[2932]: I0909 23:44:07.801299 2932 factory.go:221] Registration of the systemd container factory successfully
Sep 9 23:44:07.801934 kubelet[2932]: I0909 23:44:07.801878 2932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 23:44:07.805386 kubelet[2932]: I0909 23:44:07.805289 2932 factory.go:221] Registration of the containerd container factory successfully
Sep 9 23:44:07.838870 kubelet[2932]: I0909 23:44:07.838608 2932 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 23:44:07.838870 kubelet[2932]: I0909 23:44:07.838642 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 23:44:07.838870 kubelet[2932]: I0909 23:44:07.838672 2932 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:44:07.844092 kubelet[2932]: I0909 23:44:07.844022 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 23:44:07.845588 kubelet[2932]: I0909 23:44:07.845533 2932 policy_none.go:49] "None policy: Start"
Sep 9 23:44:07.845588 kubelet[2932]: I0909 23:44:07.845586 2932 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 23:44:07.845867 kubelet[2932]: I0909 23:44:07.845611 2932 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 23:44:07.847949 kubelet[2932]: I0909 23:44:07.847822 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 23:44:07.847949 kubelet[2932]: I0909 23:44:07.847912 2932 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 23:44:07.848666 kubelet[2932]: I0909 23:44:07.848065 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 23:44:07.848666 kubelet[2932]: I0909 23:44:07.848133 2932 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 23:44:07.848666 kubelet[2932]: E0909 23:44:07.848311 2932 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 23:44:07.851258 kubelet[2932]: W0909 23:44:07.850813 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused
Sep 9 23:44:07.851606 kubelet[2932]: E0909 23:44:07.851536 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:07.863139 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 23:44:07.881976 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 23:44:07.888655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 23:44:07.895078 kubelet[2932]: E0909 23:44:07.895025 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-64\" not found"
Sep 9 23:44:07.901081 kubelet[2932]: I0909 23:44:07.901041 2932 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:44:07.901398 kubelet[2932]: I0909 23:44:07.901341 2932 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:44:07.903019 kubelet[2932]: I0909 23:44:07.902914 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:44:07.903479 kubelet[2932]: I0909 23:44:07.903261 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:44:07.907021 kubelet[2932]: E0909 23:44:07.906931 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:44:07.907816 kubelet[2932]: E0909 23:44:07.907688 2932 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-64\" not found"
Sep 9 23:44:07.970990 systemd[1]: Created slice kubepods-burstable-poda049502f7716385fc0ff34bce72b0ad2.slice - libcontainer container kubepods-burstable-poda049502f7716385fc0ff34bce72b0ad2.slice.
Sep 9 23:44:07.989625 kubelet[2932]: E0909 23:44:07.989358 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64"
Sep 9 23:44:07.994581 systemd[1]: Created slice kubepods-burstable-pod3a67eaa8645f9ed06fc2e76801cccb01.slice - libcontainer container kubepods-burstable-pod3a67eaa8645f9ed06fc2e76801cccb01.slice.
Sep 9 23:44:07.997970 kubelet[2932]: I0909 23:44:07.997923 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64"
Sep 9 23:44:07.998227 kubelet[2932]: I0909 23:44:07.998179 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64"
Sep 9 23:44:07.999116 kubelet[2932]: I0909 23:44:07.998973 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64"
Sep 9 23:44:07.999790 kubelet[2932]: I0909 23:44:07.999741 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64"
Sep 9 23:44:07.999900 kubelet[2932]: I0909 23:44:07.999807 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64"
Sep 9 23:44:07.999900 kubelet[2932]: I0909 23:44:07.999846 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64"
Sep 9 23:44:07.999900 kubelet[2932]: I0909 23:44:07.999883 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64"
Sep 9 23:44:08.000051 kubelet[2932]: I0909 23:44:07.999925 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64"
Sep 9 23:44:08.000051 kubelet[2932]: I0909 23:44:07.999963 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4952f04756b92bf716afdbfe785a3ff1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-64\" (UID: \"4952f04756b92bf716afdbfe785a3ff1\") " pod="kube-system/kube-scheduler-ip-172-31-18-64"
Sep 9 23:44:08.000051 kubelet[2932]: E0909 23:44:07.999566 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64"
Sep 9 23:44:08.001039 kubelet[2932]: E0909 23:44:08.000978 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-64?timeout=10s\": dial tcp 172.31.18.64:6443: connect: connection refused" interval="400ms"
Sep 9 23:44:08.007651 systemd[1]: Created slice kubepods-burstable-pod4952f04756b92bf716afdbfe785a3ff1.slice - libcontainer container kubepods-burstable-pod4952f04756b92bf716afdbfe785a3ff1.slice.
Sep 9 23:44:08.008602 kubelet[2932]: I0909 23:44:08.008328 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-64"
Sep 9 23:44:08.009055 kubelet[2932]: E0909 23:44:08.008993 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.64:6443/api/v1/nodes\": dial tcp 172.31.18.64:6443: connect: connection refused" node="ip-172-31-18-64"
Sep 9 23:44:08.013057 kubelet[2932]: E0909 23:44:08.012991 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64"
Sep 9 23:44:08.212488 kubelet[2932]: I0909 23:44:08.212447 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-64"
Sep 9 23:44:08.213107 kubelet[2932]: E0909 23:44:08.213061 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.64:6443/api/v1/nodes\": dial tcp 172.31.18.64:6443: connect: connection refused" node="ip-172-31-18-64"
Sep 9 23:44:08.292026 containerd[2011]: time="2025-09-09T23:44:08.291847880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-64,Uid:a049502f7716385fc0ff34bce72b0ad2,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:08.302062 containerd[2011]: time="2025-09-09T23:44:08.301981941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-64,Uid:3a67eaa8645f9ed06fc2e76801cccb01,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:08.314740 containerd[2011]: time="2025-09-09T23:44:08.314551322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-64,Uid:4952f04756b92bf716afdbfe785a3ff1,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:08.394565 containerd[2011]: time="2025-09-09T23:44:08.394495891Z" level=info msg="connecting to shim f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8" address="unix:///run/containerd/s/c42628c0d95c65293435d84160a5e69f02df3b695eb42148d39792481246ba91" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:08.396099 containerd[2011]: time="2025-09-09T23:44:08.396040247Z" level=info msg="connecting to shim e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134" address="unix:///run/containerd/s/8c08dcced967a1e1ff437bfbdfd49b6ba3239f416a61618e0207f79ab4007c9f" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:08.402340 kubelet[2932]: E0909 23:44:08.402281 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-64?timeout=10s\": dial tcp 172.31.18.64:6443: connect: connection refused" interval="800ms"
Sep 9 23:44:08.410835 containerd[2011]: time="2025-09-09T23:44:08.410778152Z" level=info msg="connecting to shim 02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3" address="unix:///run/containerd/s/53a3a9909810b6b038618f78d140bc402fa7d438d4eedb7829a67eb413455fdc" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:08.479879 systemd[1]: Started cri-containerd-f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8.scope - libcontainer container f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8.
Sep 9 23:44:08.498801 systemd[1]: Started cri-containerd-e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134.scope - libcontainer container e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134.
Sep 9 23:44:08.519666 systemd[1]: Started cri-containerd-02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3.scope - libcontainer container 02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3.
Sep 9 23:44:08.618686 kubelet[2932]: I0909 23:44:08.618279 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-64"
Sep 9 23:44:08.620549 kubelet[2932]: E0909 23:44:08.620480 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.64:6443/api/v1/nodes\": dial tcp 172.31.18.64:6443: connect: connection refused" node="ip-172-31-18-64"
Sep 9 23:44:08.630966 containerd[2011]: time="2025-09-09T23:44:08.630361591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-64,Uid:a049502f7716385fc0ff34bce72b0ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134\""
Sep 9 23:44:08.636815 containerd[2011]: time="2025-09-09T23:44:08.636740174Z" level=info msg="CreateContainer within sandbox \"e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 23:44:08.658872 containerd[2011]: time="2025-09-09T23:44:08.658812449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-64,Uid:4952f04756b92bf716afdbfe785a3ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8\""
Sep 9 23:44:08.669658 containerd[2011]: time="2025-09-09T23:44:08.669566680Z" level=info msg="CreateContainer within sandbox \"f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 23:44:08.672491 containerd[2011]: time="2025-09-09T23:44:08.672274333Z" level=info msg="Container 38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:08.676661 containerd[2011]: time="2025-09-09T23:44:08.676551891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-64,Uid:3a67eaa8645f9ed06fc2e76801cccb01,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3\""
Sep 9 23:44:08.682958 containerd[2011]: time="2025-09-09T23:44:08.682899991Z" level=info msg="CreateContainer within sandbox \"02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 23:44:08.696917 containerd[2011]: time="2025-09-09T23:44:08.696840710Z" level=info msg="CreateContainer within sandbox \"e49248709385ba748d9712eab7f443b2d4039c81579350f684b37d4f354d9134\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e\""
Sep 9 23:44:08.701492 containerd[2011]: time="2025-09-09T23:44:08.701337858Z" level=info msg="StartContainer for \"38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e\""
Sep 9 23:44:08.702651 kubelet[2932]: W0909 23:44:08.702556 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused
Sep 9 23:44:08.703041 kubelet[2932]: E0909 23:44:08.702780 2932
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:08.705773 containerd[2011]: time="2025-09-09T23:44:08.705635705Z" level=info msg="Container d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:08.707351 containerd[2011]: time="2025-09-09T23:44:08.707243117Z" level=info msg="connecting to shim 38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e" address="unix:///run/containerd/s/8c08dcced967a1e1ff437bfbdfd49b6ba3239f416a61618e0207f79ab4007c9f" protocol=ttrpc version=3 Sep 9 23:44:08.714146 containerd[2011]: time="2025-09-09T23:44:08.713590077Z" level=info msg="Container 097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:08.731349 containerd[2011]: time="2025-09-09T23:44:08.731289502Z" level=info msg="CreateContainer within sandbox \"f4e5a12e4da038bd0105ed2f0cc9b14170950b941614a2aa155aaa8c2ac23ff8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f\"" Sep 9 23:44:08.733076 containerd[2011]: time="2025-09-09T23:44:08.733032197Z" level=info msg="StartContainer for \"d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f\"" Sep 9 23:44:08.735254 containerd[2011]: time="2025-09-09T23:44:08.735180815Z" level=info msg="CreateContainer within sandbox \"02d20cdb22653063a494af677bcbd585bdd630e964533bb83cdb27a142998da3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7\"" Sep 9 23:44:08.736003 containerd[2011]: 
time="2025-09-09T23:44:08.735961301Z" level=info msg="connecting to shim d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f" address="unix:///run/containerd/s/c42628c0d95c65293435d84160a5e69f02df3b695eb42148d39792481246ba91" protocol=ttrpc version=3 Sep 9 23:44:08.736332 containerd[2011]: time="2025-09-09T23:44:08.736198611Z" level=info msg="StartContainer for \"097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7\"" Sep 9 23:44:08.743778 containerd[2011]: time="2025-09-09T23:44:08.743642560Z" level=info msg="connecting to shim 097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7" address="unix:///run/containerd/s/53a3a9909810b6b038618f78d140bc402fa7d438d4eedb7829a67eb413455fdc" protocol=ttrpc version=3 Sep 9 23:44:08.755001 systemd[1]: Started cri-containerd-38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e.scope - libcontainer container 38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e. Sep 9 23:44:08.772145 kubelet[2932]: W0909 23:44:08.772043 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused Sep 9 23:44:08.772145 kubelet[2932]: E0909 23:44:08.772141 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:08.792067 systemd[1]: Started cri-containerd-d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f.scope - libcontainer container d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f. 
Sep 9 23:44:08.814716 systemd[1]: Started cri-containerd-097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7.scope - libcontainer container 097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7. Sep 9 23:44:08.904064 kubelet[2932]: W0909 23:44:08.903872 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-64&limit=500&resourceVersion=0": dial tcp 172.31.18.64:6443: connect: connection refused Sep 9 23:44:08.904064 kubelet[2932]: E0909 23:44:08.903975 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-64&limit=500&resourceVersion=0\": dial tcp 172.31.18.64:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:08.936493 containerd[2011]: time="2025-09-09T23:44:08.936432856Z" level=info msg="StartContainer for \"38a088ca0d5566a0a9ee2c506b1adb1569d07351abd7cd0bb6aafeada2f5399e\" returns successfully" Sep 9 23:44:08.963087 containerd[2011]: time="2025-09-09T23:44:08.963002782Z" level=info msg="StartContainer for \"097b681a0025da7af5b6445fc961f369b2eee6c268920de719a8bca2dcd59dd7\" returns successfully" Sep 9 23:44:09.035115 containerd[2011]: time="2025-09-09T23:44:09.035000688Z" level=info msg="StartContainer for \"d3b563eda851382b1f446fb790851b59bdb1c9c3a53c0f5fe6195dc0ad0def1f\" returns successfully" Sep 9 23:44:09.321329 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 9 23:44:09.425477 kubelet[2932]: I0909 23:44:09.425420 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-64" Sep 9 23:44:09.897916 kubelet[2932]: E0909 23:44:09.897812 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:09.903607 kubelet[2932]: E0909 23:44:09.903546 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:09.909282 kubelet[2932]: E0909 23:44:09.909227 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:10.912216 kubelet[2932]: E0909 23:44:10.912157 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:10.913938 kubelet[2932]: E0909 23:44:10.913864 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:10.914814 kubelet[2932]: E0909 23:44:10.914765 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:11.915492 kubelet[2932]: E0909 23:44:11.915437 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:11.916034 kubelet[2932]: E0909 23:44:11.916008 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:11.917704 kubelet[2932]: E0909 
23:44:11.917647 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:12.782872 kubelet[2932]: I0909 23:44:12.782820 2932 apiserver.go:52] "Watching apiserver" Sep 9 23:44:12.917766 kubelet[2932]: E0909 23:44:12.917709 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:12.995967 kubelet[2932]: I0909 23:44:12.995917 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:44:13.024136 kubelet[2932]: E0909 23:44:13.024076 2932 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-64\" not found" node="ip-172-31-18-64" Sep 9 23:44:13.154439 kubelet[2932]: I0909 23:44:13.152546 2932 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-64" Sep 9 23:44:13.154439 kubelet[2932]: E0909 23:44:13.152603 2932 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-64\": node \"ip-172-31-18-64\" not found" Sep 9 23:44:13.195404 kubelet[2932]: I0909 23:44:13.195342 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:13.267826 kubelet[2932]: E0909 23:44:13.267479 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:13.267826 kubelet[2932]: I0909 23:44:13.267524 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:13.277611 kubelet[2932]: E0909 23:44:13.277552 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ip-172-31-18-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:13.277959 kubelet[2932]: I0909 23:44:13.277802 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-64" Sep 9 23:44:13.288559 kubelet[2932]: E0909 23:44:13.288473 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-64" Sep 9 23:44:15.170268 systemd[1]: Reload requested from client PID 3210 ('systemctl') (unit session-7.scope)... Sep 9 23:44:15.170726 systemd[1]: Reloading... Sep 9 23:44:15.447410 zram_generator::config[3257]: No configuration found. Sep 9 23:44:15.961757 systemd[1]: Reloading finished in 790 ms. Sep 9 23:44:16.026716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:16.045365 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:44:16.045915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:16.046010 systemd[1]: kubelet.service: Consumed 1.841s CPU time, 126.6M memory peak. Sep 9 23:44:16.049482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:16.397247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:16.413514 (kubelet)[3314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:44:16.527322 kubelet[3314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:44:16.527322 kubelet[3314]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:44:16.527322 kubelet[3314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:44:16.527863 kubelet[3314]: I0909 23:44:16.527534 3314 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:44:16.540406 kubelet[3314]: I0909 23:44:16.539942 3314 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:44:16.540406 kubelet[3314]: I0909 23:44:16.539987 3314 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:44:16.540839 kubelet[3314]: I0909 23:44:16.540792 3314 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:44:16.548744 kubelet[3314]: I0909 23:44:16.548517 3314 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 23:44:16.556129 kubelet[3314]: I0909 23:44:16.556070 3314 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:44:16.566467 kubelet[3314]: I0909 23:44:16.565742 3314 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:44:16.571639 kubelet[3314]: I0909 23:44:16.571586 3314 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:44:16.572020 kubelet[3314]: I0909 23:44:16.571968 3314 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:44:16.572326 kubelet[3314]: I0909 23:44:16.572020 3314 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:44:16.573149 kubelet[3314]: I0909 23:44:16.573096 3314 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 9 23:44:16.573149 kubelet[3314]: I0909 23:44:16.573139 3314 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 23:44:16.573287 kubelet[3314]: I0909 23:44:16.573229 3314 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:16.573568 kubelet[3314]: I0909 23:44:16.573539 3314 kubelet.go:446] "Attempting to sync node with API server" Sep 9 23:44:16.573659 kubelet[3314]: I0909 23:44:16.573575 3314 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:44:16.578414 kubelet[3314]: I0909 23:44:16.575464 3314 kubelet.go:352] "Adding apiserver pod source" Sep 9 23:44:16.578414 kubelet[3314]: I0909 23:44:16.575518 3314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:44:16.581401 kubelet[3314]: I0909 23:44:16.580957 3314 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:44:16.583399 kubelet[3314]: I0909 23:44:16.582304 3314 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:44:16.585393 kubelet[3314]: I0909 23:44:16.584291 3314 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:44:16.585601 kubelet[3314]: I0909 23:44:16.585576 3314 server.go:1287] "Started kubelet" Sep 9 23:44:16.593941 kubelet[3314]: I0909 23:44:16.593896 3314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:44:16.604485 kubelet[3314]: I0909 23:44:16.604425 3314 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:44:16.606231 kubelet[3314]: I0909 23:44:16.606185 3314 server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:44:16.617669 kubelet[3314]: I0909 23:44:16.617574 3314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:44:16.618183 kubelet[3314]: I0909 23:44:16.618156 3314 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:44:16.621418 kubelet[3314]: I0909 23:44:16.620343 3314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:44:16.636792 kubelet[3314]: I0909 23:44:16.636749 3314 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:44:16.662344 kubelet[3314]: E0909 23:44:16.638412 3314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-64\" not found" Sep 9 23:44:16.662564 kubelet[3314]: I0909 23:44:16.655916 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:44:16.668860 kubelet[3314]: I0909 23:44:16.668813 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:44:16.669269 kubelet[3314]: I0909 23:44:16.669021 3314 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:44:16.669269 kubelet[3314]: I0909 23:44:16.669063 3314 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:44:16.669269 kubelet[3314]: I0909 23:44:16.669078 3314 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:44:16.669900 kubelet[3314]: E0909 23:44:16.669525 3314 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:44:16.670151 kubelet[3314]: I0909 23:44:16.670124 3314 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:44:16.671297 kubelet[3314]: I0909 23:44:16.655952 3314 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:44:16.683792 kubelet[3314]: I0909 23:44:16.683165 3314 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:44:16.683792 kubelet[3314]: I0909 23:44:16.683355 3314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:44:16.695146 kubelet[3314]: E0909 23:44:16.692069 3314 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:44:16.697352 kubelet[3314]: I0909 23:44:16.696737 3314 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:44:16.770247 kubelet[3314]: E0909 23:44:16.770207 3314 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:44:16.810311 kubelet[3314]: I0909 23:44:16.810279 3314 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:44:16.810606 kubelet[3314]: I0909 23:44:16.810583 3314 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:44:16.810712 kubelet[3314]: I0909 23:44:16.810696 3314 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:16.811092 kubelet[3314]: I0909 23:44:16.811067 3314 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:44:16.811231 kubelet[3314]: I0909 23:44:16.811192 3314 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:44:16.811324 kubelet[3314]: I0909 23:44:16.811307 3314 policy_none.go:49] "None policy: Start" Sep 9 23:44:16.811445 kubelet[3314]: I0909 23:44:16.811427 3314 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:44:16.811614 kubelet[3314]: I0909 23:44:16.811595 3314 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:44:16.811900 kubelet[3314]: I0909 23:44:16.811882 3314 state_mem.go:75] "Updated machine memory state" Sep 9 23:44:16.825421 kubelet[3314]: I0909 23:44:16.824689 3314 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:44:16.825421 kubelet[3314]: I0909 23:44:16.824955 3314 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:44:16.825421 kubelet[3314]: I0909 23:44:16.824973 3314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:44:16.825421 
kubelet[3314]: I0909 23:44:16.825347 3314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:44:16.828898 kubelet[3314]: E0909 23:44:16.828864 3314 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:44:16.962605 kubelet[3314]: I0909 23:44:16.962247 3314 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-64" Sep 9 23:44:16.972013 kubelet[3314]: I0909 23:44:16.971105 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-64" Sep 9 23:44:16.972013 kubelet[3314]: I0909 23:44:16.971715 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.974397 kubelet[3314]: I0909 23:44:16.974332 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:16.974751 kubelet[3314]: I0909 23:44:16.974517 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.974751 kubelet[3314]: I0909 23:44:16.974716 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.975897 kubelet[3314]: I0909 23:44:16.974758 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4952f04756b92bf716afdbfe785a3ff1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-64\" (UID: \"4952f04756b92bf716afdbfe785a3ff1\") " pod="kube-system/kube-scheduler-ip-172-31-18-64" Sep 9 23:44:16.975897 kubelet[3314]: I0909 23:44:16.974795 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:16.975897 kubelet[3314]: I0909 23:44:16.974832 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:16.975897 kubelet[3314]: I0909 23:44:16.974868 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a049502f7716385fc0ff34bce72b0ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-64\" (UID: \"a049502f7716385fc0ff34bce72b0ad2\") " pod="kube-system/kube-apiserver-ip-172-31-18-64" Sep 9 23:44:16.975897 kubelet[3314]: I0909 23:44:16.975468 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.976232 kubelet[3314]: I0909 23:44:16.975520 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.976232 kubelet[3314]: I0909 23:44:16.975557 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a67eaa8645f9ed06fc2e76801cccb01-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-64\" (UID: \"3a67eaa8645f9ed06fc2e76801cccb01\") " pod="kube-system/kube-controller-manager-ip-172-31-18-64" Sep 9 23:44:16.992561 kubelet[3314]: I0909 23:44:16.992356 3314 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-64" Sep 9 23:44:16.992561 kubelet[3314]: I0909 23:44:16.992501 3314 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-64" Sep 9 23:44:17.578054 kubelet[3314]: I0909 23:44:17.576865 3314 apiserver.go:52] "Watching apiserver" Sep 9 23:44:17.670924 kubelet[3314]: I0909 23:44:17.669971 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-64" podStartSLOduration=1.669949177 podStartE2EDuration="1.669949177s" podCreationTimestamp="2025-09-09 23:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:17.669089535 +0000 UTC m=+1.243908725" watchObservedRunningTime="2025-09-09 23:44:17.669949177 +0000 UTC m=+1.244768367" Sep 9 23:44:17.672613 kubelet[3314]: I0909 23:44:17.672551 3314 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:44:17.724647 kubelet[3314]: I0909 23:44:17.724549 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-64" 
podStartSLOduration=1.7245307460000001 podStartE2EDuration="1.724530746s" podCreationTimestamp="2025-09-09 23:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:17.695277166 +0000 UTC m=+1.270096380" watchObservedRunningTime="2025-09-09 23:44:17.724530746 +0000 UTC m=+1.299349936" Sep 9 23:44:17.724825 kubelet[3314]: I0909 23:44:17.724757 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-64" podStartSLOduration=1.724748295 podStartE2EDuration="1.724748295s" podCreationTimestamp="2025-09-09 23:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:17.723780911 +0000 UTC m=+1.298600125" watchObservedRunningTime="2025-09-09 23:44:17.724748295 +0000 UTC m=+1.299567473" Sep 9 23:44:22.378825 kubelet[3314]: I0909 23:44:22.378636 3314 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:44:22.380862 containerd[2011]: time="2025-09-09T23:44:22.379872310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:44:22.381344 kubelet[3314]: I0909 23:44:22.380614 3314 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:44:22.975523 update_engine[1984]: I20250909 23:44:22.975420 1984 update_attempter.cc:509] Updating boot flags... Sep 9 23:44:23.177171 systemd[1]: Created slice kubepods-besteffort-pod77541c1a_74d8_4457_9539_c36d62eb6720.slice - libcontainer container kubepods-besteffort-pod77541c1a_74d8_4457_9539_c36d62eb6720.slice. 
Sep 9 23:44:23.217408 kubelet[3314]: I0909 23:44:23.217230 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77541c1a-74d8-4457-9539-c36d62eb6720-kube-proxy\") pod \"kube-proxy-t7dqv\" (UID: \"77541c1a-74d8-4457-9539-c36d62eb6720\") " pod="kube-system/kube-proxy-t7dqv"
Sep 9 23:44:23.217408 kubelet[3314]: I0909 23:44:23.217308 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77541c1a-74d8-4457-9539-c36d62eb6720-lib-modules\") pod \"kube-proxy-t7dqv\" (UID: \"77541c1a-74d8-4457-9539-c36d62eb6720\") " pod="kube-system/kube-proxy-t7dqv"
Sep 9 23:44:23.217408 kubelet[3314]: I0909 23:44:23.217352 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77541c1a-74d8-4457-9539-c36d62eb6720-xtables-lock\") pod \"kube-proxy-t7dqv\" (UID: \"77541c1a-74d8-4457-9539-c36d62eb6720\") " pod="kube-system/kube-proxy-t7dqv"
Sep 9 23:44:23.217408 kubelet[3314]: I0909 23:44:23.217413 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2sq\" (UniqueName: \"kubernetes.io/projected/77541c1a-74d8-4457-9539-c36d62eb6720-kube-api-access-cl2sq\") pod \"kube-proxy-t7dqv\" (UID: \"77541c1a-74d8-4457-9539-c36d62eb6720\") " pod="kube-system/kube-proxy-t7dqv"
Sep 9 23:44:23.543923 containerd[2011]: time="2025-09-09T23:44:23.542720616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7dqv,Uid:77541c1a-74d8-4457-9539-c36d62eb6720,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:23.649040 containerd[2011]: time="2025-09-09T23:44:23.648548812Z" level=info msg="connecting to shim eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7" address="unix:///run/containerd/s/b7dc9ad44fa6feb21e19ecfee36225ea379976053c187d564745dbab53ccbdfc" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:23.730577 kubelet[3314]: I0909 23:44:23.730170 3314 status_manager.go:890] "Failed to get status for pod" podUID="4a3a0485-ece2-46ad-9b78-d379e20f4a7e" pod="tigera-operator/tigera-operator-755d956888-knvsm" err="pods \"tigera-operator-755d956888-knvsm\" is forbidden: User \"system:node:ip-172-31-18-64\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-18-64' and this object"
Sep 9 23:44:23.735108 kubelet[3314]: W0909 23:44:23.731742 3314 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-18-64" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-64' and this object
Sep 9 23:44:23.735108 kubelet[3314]: E0909 23:44:23.731831 3314 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ip-172-31-18-64\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-18-64' and this object" logger="UnhandledError"
Sep 9 23:44:23.790268 systemd[1]: Created slice kubepods-besteffort-pod4a3a0485_ece2_46ad_9b78_d379e20f4a7e.slice - libcontainer container kubepods-besteffort-pod4a3a0485_ece2_46ad_9b78_d379e20f4a7e.slice.
Sep 9 23:44:23.824710 systemd[1]: Started cri-containerd-eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7.scope - libcontainer container eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7.
Sep 9 23:44:23.827562 kubelet[3314]: I0909 23:44:23.826036 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a3a0485-ece2-46ad-9b78-d379e20f4a7e-var-lib-calico\") pod \"tigera-operator-755d956888-knvsm\" (UID: \"4a3a0485-ece2-46ad-9b78-d379e20f4a7e\") " pod="tigera-operator/tigera-operator-755d956888-knvsm"
Sep 9 23:44:23.827562 kubelet[3314]: I0909 23:44:23.826115 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kl4\" (UniqueName: \"kubernetes.io/projected/4a3a0485-ece2-46ad-9b78-d379e20f4a7e-kube-api-access-64kl4\") pod \"tigera-operator-755d956888-knvsm\" (UID: \"4a3a0485-ece2-46ad-9b78-d379e20f4a7e\") " pod="tigera-operator/tigera-operator-755d956888-knvsm"
Sep 9 23:44:23.928102 containerd[2011]: time="2025-09-09T23:44:23.926749673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7dqv,Uid:77541c1a-74d8-4457-9539-c36d62eb6720,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7\""
Sep 9 23:44:23.952644 containerd[2011]: time="2025-09-09T23:44:23.952579670Z" level=info msg="CreateContainer within sandbox \"eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 23:44:24.025919 containerd[2011]: time="2025-09-09T23:44:24.025863370Z" level=info msg="Container debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:24.091907 containerd[2011]: time="2025-09-09T23:44:24.091758349Z" level=info msg="CreateContainer within sandbox \"eca06438fc5f73ee41cd993b7d58708a3c2672af0f1036c273b50c29a6c000f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e\""
Sep 9 23:44:24.095573 containerd[2011]: time="2025-09-09T23:44:24.095086076Z" level=info msg="StartContainer for \"debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e\""
Sep 9 23:44:24.103211 containerd[2011]: time="2025-09-09T23:44:24.102644382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-knvsm,Uid:4a3a0485-ece2-46ad-9b78-d379e20f4a7e,Namespace:tigera-operator,Attempt:0,}"
Sep 9 23:44:24.105607 containerd[2011]: time="2025-09-09T23:44:24.105461013Z" level=info msg="connecting to shim debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e" address="unix:///run/containerd/s/b7dc9ad44fa6feb21e19ecfee36225ea379976053c187d564745dbab53ccbdfc" protocol=ttrpc version=3
Sep 9 23:44:24.173961 containerd[2011]: time="2025-09-09T23:44:24.173197485Z" level=info msg="connecting to shim 37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1" address="unix:///run/containerd/s/ec9ff4e74654c7bef40dd1decc28af7d457308106e6416bcd633cb466b03367d" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:24.242695 systemd[1]: Started cri-containerd-debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e.scope - libcontainer container debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e.
Sep 9 23:44:24.257694 systemd[1]: Started cri-containerd-37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1.scope - libcontainer container 37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1.
Sep 9 23:44:24.370926 containerd[2011]: time="2025-09-09T23:44:24.369548593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-knvsm,Uid:4a3a0485-ece2-46ad-9b78-d379e20f4a7e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1\""
Sep 9 23:44:24.385176 containerd[2011]: time="2025-09-09T23:44:24.385098332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 23:44:24.389531 containerd[2011]: time="2025-09-09T23:44:24.389458227Z" level=info msg="StartContainer for \"debb33fdbedb562c61ba20fae69ce77e0e34357af26b911f213284b53e837c2e\" returns successfully"
Sep 9 23:44:24.793007 kubelet[3314]: I0909 23:44:24.792915 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t7dqv" podStartSLOduration=1.7928962560000001 podStartE2EDuration="1.792896256s" podCreationTimestamp="2025-09-09 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:24.792655175 +0000 UTC m=+8.367474377" watchObservedRunningTime="2025-09-09 23:44:24.792896256 +0000 UTC m=+8.367715434"
Sep 9 23:44:25.674901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707490835.mount: Deactivated successfully.
Sep 9 23:44:26.648419 containerd[2011]: time="2025-09-09T23:44:26.648329850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:26.650860 containerd[2011]: time="2025-09-09T23:44:26.650807144Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 9 23:44:26.651667 containerd[2011]: time="2025-09-09T23:44:26.651607620Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:26.655186 containerd[2011]: time="2025-09-09T23:44:26.655124129Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:26.656812 containerd[2011]: time="2025-09-09T23:44:26.656756909Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.271589147s"
Sep 9 23:44:26.657009 containerd[2011]: time="2025-09-09T23:44:26.656981025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 9 23:44:26.661533 containerd[2011]: time="2025-09-09T23:44:26.661223430Z" level=info msg="CreateContainer within sandbox \"37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 23:44:26.677722 containerd[2011]: time="2025-09-09T23:44:26.677652452Z" level=info msg="Container 563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:26.694324 containerd[2011]: time="2025-09-09T23:44:26.694047966Z" level=info msg="CreateContainer within sandbox \"37a5cdfad3def6ac05dd6dc4d108b8081b7ea17d208cd22d7eeeb0705b252fb1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094\""
Sep 9 23:44:26.695101 containerd[2011]: time="2025-09-09T23:44:26.695058931Z" level=info msg="StartContainer for \"563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094\""
Sep 9 23:44:26.698014 containerd[2011]: time="2025-09-09T23:44:26.697834922Z" level=info msg="connecting to shim 563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094" address="unix:///run/containerd/s/ec9ff4e74654c7bef40dd1decc28af7d457308106e6416bcd633cb466b03367d" protocol=ttrpc version=3
Sep 9 23:44:26.749704 systemd[1]: Started cri-containerd-563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094.scope - libcontainer container 563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094.
Sep 9 23:44:26.847286 containerd[2011]: time="2025-09-09T23:44:26.847225473Z" level=info msg="StartContainer for \"563a0c728f1c6ccebb5abc44555aecefc1502918f4351512526e2a9726721094\" returns successfully"
Sep 9 23:44:33.668505 sudo[2353]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:33.693415 sshd[2352]: Connection closed by 139.178.89.65 port 36898
Sep 9 23:44:33.693601 sshd-session[2349]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:33.701901 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 23:44:33.702869 systemd[1]: session-7.scope: Consumed 11.309s CPU time, 220M memory peak.
Sep 9 23:44:33.708416 systemd[1]: sshd@6-172.31.18.64:22-139.178.89.65:36898.service: Deactivated successfully.
Sep 9 23:44:33.721680 systemd-logind[1981]: Session 7 logged out. Waiting for processes to exit.
Sep 9 23:44:33.727870 systemd-logind[1981]: Removed session 7.
Sep 9 23:44:43.755937 kubelet[3314]: I0909 23:44:43.755842 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-knvsm" podStartSLOduration=18.475190956 podStartE2EDuration="20.755790677s" podCreationTimestamp="2025-09-09 23:44:23 +0000 UTC" firstStartedPulling="2025-09-09 23:44:24.377657854 +0000 UTC m=+7.952477032" lastFinishedPulling="2025-09-09 23:44:26.658257575 +0000 UTC m=+10.233076753" observedRunningTime="2025-09-09 23:44:27.832459295 +0000 UTC m=+11.407278497" watchObservedRunningTime="2025-09-09 23:44:43.755790677 +0000 UTC m=+27.330609975"
Sep 9 23:44:43.779593 systemd[1]: Created slice kubepods-besteffort-pod17ddd906_fef1_43ea_a227_4cfc83e5a920.slice - libcontainer container kubepods-besteffort-pod17ddd906_fef1_43ea_a227_4cfc83e5a920.slice.
Sep 9 23:44:43.872992 kubelet[3314]: I0909 23:44:43.872752 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17ddd906-fef1-43ea-a227-4cfc83e5a920-typha-certs\") pod \"calico-typha-7bf9dd8f86-jdrpk\" (UID: \"17ddd906-fef1-43ea-a227-4cfc83e5a920\") " pod="calico-system/calico-typha-7bf9dd8f86-jdrpk"
Sep 9 23:44:43.872992 kubelet[3314]: I0909 23:44:43.872851 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ddd906-fef1-43ea-a227-4cfc83e5a920-tigera-ca-bundle\") pod \"calico-typha-7bf9dd8f86-jdrpk\" (UID: \"17ddd906-fef1-43ea-a227-4cfc83e5a920\") " pod="calico-system/calico-typha-7bf9dd8f86-jdrpk"
Sep 9 23:44:43.872992 kubelet[3314]: I0909 23:44:43.872897 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9f6b\" (UniqueName: \"kubernetes.io/projected/17ddd906-fef1-43ea-a227-4cfc83e5a920-kube-api-access-g9f6b\") pod \"calico-typha-7bf9dd8f86-jdrpk\" (UID: \"17ddd906-fef1-43ea-a227-4cfc83e5a920\") " pod="calico-system/calico-typha-7bf9dd8f86-jdrpk"
Sep 9 23:44:44.093306 containerd[2011]: time="2025-09-09T23:44:44.093052029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf9dd8f86-jdrpk,Uid:17ddd906-fef1-43ea-a227-4cfc83e5a920,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:44.160925 containerd[2011]: time="2025-09-09T23:44:44.160848831Z" level=info msg="connecting to shim 057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b" address="unix:///run/containerd/s/94e51e5b14cbe2d690263698bf9f1136548c1cad38b8880088e0b939bfa8bbff" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:44.175695 systemd[1]: Created slice kubepods-besteffort-pod54db801f_f191_4cdb_b348_77f5a3aa291b.slice - libcontainer container kubepods-besteffort-pod54db801f_f191_4cdb_b348_77f5a3aa291b.slice.
Sep 9 23:44:44.278061 kubelet[3314]: I0909 23:44:44.278014 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-cni-bin-dir\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278356 kubelet[3314]: I0909 23:44:44.278312 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54db801f-f191-4cdb-b348-77f5a3aa291b-node-certs\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278488 kubelet[3314]: I0909 23:44:44.278393 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgb8z\" (UniqueName: \"kubernetes.io/projected/54db801f-f191-4cdb-b348-77f5a3aa291b-kube-api-access-xgb8z\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278488 kubelet[3314]: I0909 23:44:44.278447 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-xtables-lock\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278591 kubelet[3314]: I0909 23:44:44.278491 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-flexvol-driver-host\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278591 kubelet[3314]: I0909 23:44:44.278528 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-cni-net-dir\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278591 kubelet[3314]: I0909 23:44:44.278571 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-lib-modules\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278739 kubelet[3314]: I0909 23:44:44.278608 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-policysync\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278739 kubelet[3314]: I0909 23:44:44.278647 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54db801f-f191-4cdb-b348-77f5a3aa291b-tigera-ca-bundle\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278739 kubelet[3314]: I0909 23:44:44.278697 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-var-lib-calico\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278874 kubelet[3314]: I0909 23:44:44.278736 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-var-run-calico\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.278874 kubelet[3314]: I0909 23:44:44.278775 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54db801f-f191-4cdb-b348-77f5a3aa291b-cni-log-dir\") pod \"calico-node-7dq62\" (UID: \"54db801f-f191-4cdb-b348-77f5a3aa291b\") " pod="calico-system/calico-node-7dq62"
Sep 9 23:44:44.284704 systemd[1]: Started cri-containerd-057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b.scope - libcontainer container 057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b.
Sep 9 23:44:44.391597 kubelet[3314]: E0909 23:44:44.391434 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.391732 kubelet[3314]: W0909 23:44:44.391604 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.391732 kubelet[3314]: E0909 23:44:44.391646 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.395946 kubelet[3314]: E0909 23:44:44.395854 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.395946 kubelet[3314]: W0909 23:44:44.395891 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.395946 kubelet[3314]: E0909 23:44:44.395946 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.440810 kubelet[3314]: E0909 23:44:44.440460 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.440810 kubelet[3314]: W0909 23:44:44.440498 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.440810 kubelet[3314]: E0909 23:44:44.440534 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.489938 containerd[2011]: time="2025-09-09T23:44:44.489853358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7dq62,Uid:54db801f-f191-4cdb-b348-77f5a3aa291b,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:44.511634 kubelet[3314]: E0909 23:44:44.510353 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437"
Sep 9 23:44:44.513045 containerd[2011]: time="2025-09-09T23:44:44.512799093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf9dd8f86-jdrpk,Uid:17ddd906-fef1-43ea-a227-4cfc83e5a920,Namespace:calico-system,Attempt:0,} returns sandbox id \"057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b\""
Sep 9 23:44:44.519496 containerd[2011]: time="2025-09-09T23:44:44.519350047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 23:44:44.545407 kubelet[3314]: E0909 23:44:44.544869 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.545407 kubelet[3314]: W0909 23:44:44.545344 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.546786 kubelet[3314]: E0909 23:44:44.546527 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.547674 kubelet[3314]: E0909 23:44:44.547533 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.548822 kubelet[3314]: W0909 23:44:44.548623 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.549108 kubelet[3314]: E0909 23:44:44.548745 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.550864 kubelet[3314]: E0909 23:44:44.550814 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.551406 kubelet[3314]: W0909 23:44:44.551061 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.551406 kubelet[3314]: E0909 23:44:44.551103 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.552309 kubelet[3314]: E0909 23:44:44.552138 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.552309 kubelet[3314]: W0909 23:44:44.552201 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.552652 kubelet[3314]: E0909 23:44:44.552228 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.553397 kubelet[3314]: E0909 23:44:44.553215 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.553397 kubelet[3314]: W0909 23:44:44.553270 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.553671 kubelet[3314]: E0909 23:44:44.553298 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.554248 kubelet[3314]: E0909 23:44:44.554140 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.555267 kubelet[3314]: W0909 23:44:44.554414 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.555924 kubelet[3314]: E0909 23:44:44.554482 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.557872 kubelet[3314]: E0909 23:44:44.557495 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.557872 kubelet[3314]: W0909 23:44:44.557533 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.557872 kubelet[3314]: E0909 23:44:44.557564 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.559712 containerd[2011]: time="2025-09-09T23:44:44.559629025Z" level=info msg="connecting to shim 71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3" address="unix:///run/containerd/s/8c9c2de84ec6cfaacd08c31a161930fd38c26f0ac1ff3419961ecef86498926a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:44.561718 kubelet[3314]: E0909 23:44:44.561647 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.562336 kubelet[3314]: W0909 23:44:44.562076 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.562336 kubelet[3314]: E0909 23:44:44.562175 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.564730 kubelet[3314]: E0909 23:44:44.564696 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.565328 kubelet[3314]: W0909 23:44:44.565169 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.565328 kubelet[3314]: E0909 23:44:44.565206 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.566992 kubelet[3314]: E0909 23:44:44.566833 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.568186 kubelet[3314]: W0909 23:44:44.567364 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.568186 kubelet[3314]: E0909 23:44:44.567986 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.570025 kubelet[3314]: E0909 23:44:44.569524 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.571580 kubelet[3314]: W0909 23:44:44.571439 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.572403 kubelet[3314]: E0909 23:44:44.571955 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.574960 kubelet[3314]: E0909 23:44:44.573862 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.574960 kubelet[3314]: W0909 23:44:44.573897 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.574960 kubelet[3314]: E0909 23:44:44.573928 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.577229 kubelet[3314]: E0909 23:44:44.577093 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.577229 kubelet[3314]: W0909 23:44:44.577154 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.577694 kubelet[3314]: E0909 23:44:44.577188 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.578614 kubelet[3314]: E0909 23:44:44.578411 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.578614 kubelet[3314]: W0909 23:44:44.578440 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.578614 kubelet[3314]: E0909 23:44:44.578467 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.579283 kubelet[3314]: E0909 23:44:44.579153 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.579283 kubelet[3314]: W0909 23:44:44.579209 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.579283 kubelet[3314]: E0909 23:44:44.579238 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.581339 kubelet[3314]: E0909 23:44:44.581301 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.581869 kubelet[3314]: W0909 23:44:44.581552 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.581869 kubelet[3314]: E0909 23:44:44.581607 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.582571 kubelet[3314]: E0909 23:44:44.582540 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.582865 kubelet[3314]: W0909 23:44:44.582700 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.582865 kubelet[3314]: E0909 23:44:44.582737 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 23:44:44.583419 kubelet[3314]: E0909 23:44:44.583329 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.583720 kubelet[3314]: W0909 23:44:44.583358 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.583720 kubelet[3314]: E0909 23:44:44.583588 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.584349 kubelet[3314]: E0909 23:44:44.584149 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.584349 kubelet[3314]: W0909 23:44:44.584174 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.584349 kubelet[3314]: E0909 23:44:44.584199 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.585003 kubelet[3314]: E0909 23:44:44.584824 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.585003 kubelet[3314]: W0909 23:44:44.584845 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.585003 kubelet[3314]: E0909 23:44:44.584868 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.586872 kubelet[3314]: E0909 23:44:44.586279 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.586872 kubelet[3314]: W0909 23:44:44.586320 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.586872 kubelet[3314]: E0909 23:44:44.586351 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.587109 kubelet[3314]: I0909 23:44:44.586880 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e7cc1ef-5e33-408e-bd7c-721420748437-socket-dir\") pod \"csi-node-driver-44bk5\" (UID: \"6e7cc1ef-5e33-408e-bd7c-721420748437\") " pod="calico-system/csi-node-driver-44bk5" Sep 9 23:44:44.588098 kubelet[3314]: E0909 23:44:44.588044 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.588526 kubelet[3314]: W0909 23:44:44.588120 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.588526 kubelet[3314]: E0909 23:44:44.588171 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.588526 kubelet[3314]: I0909 23:44:44.588241 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e7cc1ef-5e33-408e-bd7c-721420748437-varrun\") pod \"csi-node-driver-44bk5\" (UID: \"6e7cc1ef-5e33-408e-bd7c-721420748437\") " pod="calico-system/csi-node-driver-44bk5" Sep 9 23:44:44.589072 kubelet[3314]: E0909 23:44:44.589010 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.589072 kubelet[3314]: W0909 23:44:44.589040 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.589513 kubelet[3314]: E0909 23:44:44.589265 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.590117 kubelet[3314]: E0909 23:44:44.590088 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.591162 kubelet[3314]: W0909 23:44:44.590578 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.591162 kubelet[3314]: E0909 23:44:44.590644 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.592577 kubelet[3314]: E0909 23:44:44.592537 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.593118 kubelet[3314]: W0909 23:44:44.592740 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.593118 kubelet[3314]: E0909 23:44:44.592821 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.593118 kubelet[3314]: I0909 23:44:44.592882 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e7cc1ef-5e33-408e-bd7c-721420748437-kubelet-dir\") pod \"csi-node-driver-44bk5\" (UID: \"6e7cc1ef-5e33-408e-bd7c-721420748437\") " pod="calico-system/csi-node-driver-44bk5" Sep 9 23:44:44.595905 kubelet[3314]: E0909 23:44:44.595638 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.595905 kubelet[3314]: W0909 23:44:44.595674 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.595905 kubelet[3314]: E0909 23:44:44.595744 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.595905 kubelet[3314]: I0909 23:44:44.595798 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e7cc1ef-5e33-408e-bd7c-721420748437-registration-dir\") pod \"csi-node-driver-44bk5\" (UID: \"6e7cc1ef-5e33-408e-bd7c-721420748437\") " pod="calico-system/csi-node-driver-44bk5" Sep 9 23:44:44.597280 kubelet[3314]: E0909 23:44:44.596643 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.597280 kubelet[3314]: W0909 23:44:44.596675 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.597280 kubelet[3314]: E0909 23:44:44.596984 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.598230 kubelet[3314]: E0909 23:44:44.598069 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.598230 kubelet[3314]: W0909 23:44:44.598101 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.599159 kubelet[3314]: E0909 23:44:44.598740 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.599965 kubelet[3314]: E0909 23:44:44.599559 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.599965 kubelet[3314]: W0909 23:44:44.599589 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.600587 kubelet[3314]: E0909 23:44:44.600342 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.600587 kubelet[3314]: I0909 23:44:44.600434 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdvt\" (UniqueName: \"kubernetes.io/projected/6e7cc1ef-5e33-408e-bd7c-721420748437-kube-api-access-mgdvt\") pod \"csi-node-driver-44bk5\" (UID: \"6e7cc1ef-5e33-408e-bd7c-721420748437\") " pod="calico-system/csi-node-driver-44bk5" Sep 9 23:44:44.603881 kubelet[3314]: E0909 23:44:44.603558 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.603881 kubelet[3314]: W0909 23:44:44.603593 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.603881 kubelet[3314]: E0909 23:44:44.603837 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.605089 kubelet[3314]: E0909 23:44:44.604456 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.605089 kubelet[3314]: W0909 23:44:44.604485 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.605089 kubelet[3314]: E0909 23:44:44.604530 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.606286 kubelet[3314]: E0909 23:44:44.605719 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.606286 kubelet[3314]: W0909 23:44:44.605751 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.606286 kubelet[3314]: E0909 23:44:44.605794 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.607447 kubelet[3314]: E0909 23:44:44.607073 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.607447 kubelet[3314]: W0909 23:44:44.607102 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.607447 kubelet[3314]: E0909 23:44:44.607128 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.608741 kubelet[3314]: E0909 23:44:44.608654 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.608741 kubelet[3314]: W0909 23:44:44.608692 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.608741 kubelet[3314]: E0909 23:44:44.608727 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.609455 kubelet[3314]: E0909 23:44:44.609130 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.609455 kubelet[3314]: W0909 23:44:44.609162 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.609455 kubelet[3314]: E0909 23:44:44.609186 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.648711 systemd[1]: Started cri-containerd-71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3.scope - libcontainer container 71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3. Sep 9 23:44:44.709321 kubelet[3314]: E0909 23:44:44.709224 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.709321 kubelet[3314]: W0909 23:44:44.709285 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.709685 kubelet[3314]: E0909 23:44:44.709318 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.710922 kubelet[3314]: E0909 23:44:44.710874 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.710922 kubelet[3314]: W0909 23:44:44.710912 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.711099 kubelet[3314]: E0909 23:44:44.710971 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.711511 kubelet[3314]: E0909 23:44:44.711468 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.711840 kubelet[3314]: W0909 23:44:44.711521 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.711840 kubelet[3314]: E0909 23:44:44.711579 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.712929 kubelet[3314]: E0909 23:44:44.712879 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.712929 kubelet[3314]: W0909 23:44:44.712939 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.713504 kubelet[3314]: E0909 23:44:44.713139 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.714079 kubelet[3314]: E0909 23:44:44.714023 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.714079 kubelet[3314]: W0909 23:44:44.714061 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.714457 kubelet[3314]: E0909 23:44:44.714427 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.715864 kubelet[3314]: E0909 23:44:44.715689 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.715864 kubelet[3314]: W0909 23:44:44.715748 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.716445 kubelet[3314]: E0909 23:44:44.716130 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.716445 kubelet[3314]: W0909 23:44:44.716162 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.716445 kubelet[3314]: E0909 23:44:44.716222 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.716897 kubelet[3314]: E0909 23:44:44.716477 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.716897 kubelet[3314]: E0909 23:44:44.716883 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.717025 kubelet[3314]: W0909 23:44:44.716910 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.717517 kubelet[3314]: E0909 23:44:44.717468 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.718056 kubelet[3314]: E0909 23:44:44.718012 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.718056 kubelet[3314]: W0909 23:44:44.718047 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.718328 kubelet[3314]: E0909 23:44:44.718107 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.719642 kubelet[3314]: E0909 23:44:44.719589 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.719642 kubelet[3314]: W0909 23:44:44.719628 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.720042 kubelet[3314]: E0909 23:44:44.719691 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.720042 kubelet[3314]: E0909 23:44:44.719981 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.720042 kubelet[3314]: W0909 23:44:44.719997 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.720453 kubelet[3314]: E0909 23:44:44.720180 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.720992 kubelet[3314]: E0909 23:44:44.720940 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.721139 kubelet[3314]: W0909 23:44:44.721006 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.722689 kubelet[3314]: E0909 23:44:44.722603 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.722886 kubelet[3314]: E0909 23:44:44.722766 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.722886 kubelet[3314]: W0909 23:44:44.722787 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.724009 kubelet[3314]: E0909 23:44:44.723495 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.724009 kubelet[3314]: E0909 23:44:44.723937 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.724009 kubelet[3314]: W0909 23:44:44.723961 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.724797 kubelet[3314]: E0909 23:44:44.724353 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.725167 kubelet[3314]: E0909 23:44:44.724915 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.725167 kubelet[3314]: W0909 23:44:44.724974 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.725167 kubelet[3314]: E0909 23:44:44.725030 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:44.727119 kubelet[3314]: E0909 23:44:44.725308 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.727119 kubelet[3314]: W0909 23:44:44.725326 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.727119 kubelet[3314]: E0909 23:44:44.725563 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:44.727119 kubelet[3314]: E0909 23:44:44.726614 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:44.727119 kubelet[3314]: W0909 23:44:44.726646 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:44.727119 kubelet[3314]: E0909 23:44:44.726796 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 23:44:44.729690 kubelet[3314]: E0909 23:44:44.729630 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.729690 kubelet[3314]: W0909 23:44:44.729668 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.729690 kubelet[3314]: E0909 23:44:44.729736 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.733438 kubelet[3314]: E0909 23:44:44.733139 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.733438 kubelet[3314]: W0909 23:44:44.733177 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.736866 kubelet[3314]: E0909 23:44:44.736813 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.736866 kubelet[3314]: W0909 23:44:44.736857 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.738863 kubelet[3314]: E0909 23:44:44.738799 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.738863 kubelet[3314]: W0909 23:44:44.738843 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.740495 kubelet[3314]: E0909 23:44:44.740432 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.740495 kubelet[3314]: E0909 23:44:44.740478 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.744979 kubelet[3314]: E0909 23:44:44.744919 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.744979 kubelet[3314]: W0909 23:44:44.744963 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.745191 kubelet[3314]: E0909 23:44:44.744998 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.747597 kubelet[3314]: E0909 23:44:44.747535 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.747727 kubelet[3314]: W0909 23:44:44.747613 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.747727 kubelet[3314]: E0909 23:44:44.747646 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.749012 kubelet[3314]: E0909 23:44:44.748732 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.749012 kubelet[3314]: E0909 23:44:44.748921 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.749012 kubelet[3314]: W0909 23:44:44.748940 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.749012 kubelet[3314]: E0909 23:44:44.748981 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.751830 kubelet[3314]: E0909 23:44:44.751754 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.751830 kubelet[3314]: W0909 23:44:44.751817 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.752024 kubelet[3314]: E0909 23:44:44.751852 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:44.788522 containerd[2011]: time="2025-09-09T23:44:44.787466372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7dq62,Uid:54db801f-f191-4cdb-b348-77f5a3aa291b,Namespace:calico-system,Attempt:0,} returns sandbox id \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\""
Sep 9 23:44:44.801567 kubelet[3314]: E0909 23:44:44.801523 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:44.801567 kubelet[3314]: W0909 23:44:44.801557 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:44.802128 kubelet[3314]: E0909 23:44:44.801586 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:45.795453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226923825.mount: Deactivated successfully.
Sep 9 23:44:46.672244 kubelet[3314]: E0909 23:44:46.671457 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437"
Sep 9 23:44:47.348121 containerd[2011]: time="2025-09-09T23:44:47.348051935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:47.349623 containerd[2011]: time="2025-09-09T23:44:47.349355810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 9 23:44:47.350660 containerd[2011]: time="2025-09-09T23:44:47.350606439Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:47.354121 containerd[2011]: time="2025-09-09T23:44:47.354072355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:47.356147 containerd[2011]: time="2025-09-09T23:44:47.355305588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.835803245s"
Sep 9 23:44:47.356147 containerd[2011]: time="2025-09-09T23:44:47.355361512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 9 23:44:47.359250 containerd[2011]: time="2025-09-09T23:44:47.358907940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 23:44:47.384577 containerd[2011]: time="2025-09-09T23:44:47.384529861Z" level=info msg="CreateContainer within sandbox \"057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 23:44:47.398835 containerd[2011]: time="2025-09-09T23:44:47.398769805Z" level=info msg="Container 9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:47.407285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064977765.mount: Deactivated successfully.
Sep 9 23:44:47.419289 containerd[2011]: time="2025-09-09T23:44:47.419209660Z" level=info msg="CreateContainer within sandbox \"057cb6d91372faa3fa16e05aff909e042e83c7ce79e022c6df9cd6fa49587c6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797\""
Sep 9 23:44:47.421267 containerd[2011]: time="2025-09-09T23:44:47.421167959Z" level=info msg="StartContainer for \"9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797\""
Sep 9 23:44:47.424327 containerd[2011]: time="2025-09-09T23:44:47.424276589Z" level=info msg="connecting to shim 9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797" address="unix:///run/containerd/s/94e51e5b14cbe2d690263698bf9f1136548c1cad38b8880088e0b939bfa8bbff" protocol=ttrpc version=3
Sep 9 23:44:47.469704 systemd[1]: Started cri-containerd-9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797.scope - libcontainer container 9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797.
Sep 9 23:44:47.634011 containerd[2011]: time="2025-09-09T23:44:47.633365702Z" level=info msg="StartContainer for \"9f75135352d9af1dc4efbdc6872f514fc8491622f01f734c61f10a10dc152797\" returns successfully"
Sep 9 23:44:47.916267 kubelet[3314]: E0909 23:44:47.916142 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.916861 kubelet[3314]: W0909 23:44:47.916824 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.917019 kubelet[3314]: E0909 23:44:47.916993 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.918552 kubelet[3314]: E0909 23:44:47.917991 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.918552 kubelet[3314]: W0909 23:44:47.918406 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.918552 kubelet[3314]: E0909 23:44:47.918445 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.920721 kubelet[3314]: E0909 23:44:47.920541 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.920721 kubelet[3314]: W0909 23:44:47.920582 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.920721 kubelet[3314]: E0909 23:44:47.920613 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.921869 kubelet[3314]: E0909 23:44:47.921331 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.921869 kubelet[3314]: W0909 23:44:47.921412 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.921869 kubelet[3314]: E0909 23:44:47.921442 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.926427 kubelet[3314]: E0909 23:44:47.925221 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.926427 kubelet[3314]: W0909 23:44:47.925260 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.926427 kubelet[3314]: E0909 23:44:47.925291 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.927667 kubelet[3314]: E0909 23:44:47.927633 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.927930 kubelet[3314]: W0909 23:44:47.927859 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.928456 kubelet[3314]: E0909 23:44:47.928025 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.929312 kubelet[3314]: E0909 23:44:47.929272 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.929776 kubelet[3314]: W0909 23:44:47.929501 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.929776 kubelet[3314]: E0909 23:44:47.929538 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.930605 kubelet[3314]: E0909 23:44:47.930519 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.931494 kubelet[3314]: W0909 23:44:47.931273 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.931494 kubelet[3314]: E0909 23:44:47.931323 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.932173 kubelet[3314]: E0909 23:44:47.931972 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.932173 kubelet[3314]: W0909 23:44:47.931999 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.932173 kubelet[3314]: E0909 23:44:47.932026 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.933582 kubelet[3314]: E0909 23:44:47.933266 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.933582 kubelet[3314]: W0909 23:44:47.933368 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.933582 kubelet[3314]: E0909 23:44:47.933434 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.935792 kubelet[3314]: E0909 23:44:47.935591 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.935792 kubelet[3314]: W0909 23:44:47.935629 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.935792 kubelet[3314]: E0909 23:44:47.935659 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.936424 kubelet[3314]: E0909 23:44:47.936395 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.936691 kubelet[3314]: W0909 23:44:47.936544 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.936691 kubelet[3314]: E0909 23:44:47.936579 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.938034 kubelet[3314]: E0909 23:44:47.937964 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.938688 kubelet[3314]: W0909 23:44:47.938342 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.939207 kubelet[3314]: E0909 23:44:47.939131 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.940580 kubelet[3314]: E0909 23:44:47.940259 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.940580 kubelet[3314]: W0909 23:44:47.940436 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.940580 kubelet[3314]: E0909 23:44:47.940469 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.941803 kubelet[3314]: E0909 23:44:47.941644 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.941803 kubelet[3314]: W0909 23:44:47.941678 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.942438 kubelet[3314]: E0909 23:44:47.942201 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.951806 kubelet[3314]: E0909 23:44:47.951763 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.952112 kubelet[3314]: W0909 23:44:47.951913 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.952112 kubelet[3314]: E0909 23:44:47.951949 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.953685 kubelet[3314]: E0909 23:44:47.953617 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.954488 kubelet[3314]: W0909 23:44:47.953654 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.954693 kubelet[3314]: E0909 23:44:47.954618 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.956294 kubelet[3314]: E0909 23:44:47.956147 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.957208 kubelet[3314]: W0909 23:44:47.956591 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.957620 kubelet[3314]: E0909 23:44:47.957456 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.958877 kubelet[3314]: E0909 23:44:47.958841 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.959304 kubelet[3314]: W0909 23:44:47.959272 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.959721 kubelet[3314]: E0909 23:44:47.959529 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.960265 kubelet[3314]: E0909 23:44:47.960210 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.961466 kubelet[3314]: W0909 23:44:47.961426 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.962052 kubelet[3314]: E0909 23:44:47.962025 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.962390 kubelet[3314]: W0909 23:44:47.962204 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.962541 kubelet[3314]: E0909 23:44:47.962506 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.962734 kubelet[3314]: E0909 23:44:47.962558 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.962734 kubelet[3314]: E0909 23:44:47.962679 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.962734 kubelet[3314]: W0909 23:44:47.962697 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.963139 kubelet[3314]: E0909 23:44:47.962940 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.963698 kubelet[3314]: E0909 23:44:47.963644 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.964434 kubelet[3314]: W0909 23:44:47.963875 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.964434 kubelet[3314]: E0909 23:44:47.964001 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.964937 kubelet[3314]: E0909 23:44:47.964907 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.965157 kubelet[3314]: W0909 23:44:47.965130 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.965435 kubelet[3314]: E0909 23:44:47.965305 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.966314 kubelet[3314]: E0909 23:44:47.966155 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.966314 kubelet[3314]: W0909 23:44:47.966206 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.967275 kubelet[3314]: E0909 23:44:47.966972 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.967787 kubelet[3314]: E0909 23:44:47.967745 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.968226 kubelet[3314]: W0909 23:44:47.968020 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.968485 kubelet[3314]: E0909 23:44:47.968357 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.969288 kubelet[3314]: E0909 23:44:47.969213 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.969288 kubelet[3314]: W0909 23:44:47.969247 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.970010 kubelet[3314]: E0909 23:44:47.969780 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.971302 kubelet[3314]: E0909 23:44:47.971005 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.971302 kubelet[3314]: W0909 23:44:47.971038 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.971564 kubelet[3314]: E0909 23:44:47.971535 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.971970 kubelet[3314]: E0909 23:44:47.971944 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.972751 kubelet[3314]: W0909 23:44:47.972409 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.972751 kubelet[3314]: E0909 23:44:47.972500 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.974020 kubelet[3314]: E0909 23:44:47.973907 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.974614 kubelet[3314]: W0909 23:44:47.974237 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.974980 kubelet[3314]: E0909 23:44:47.974869 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.975267 kubelet[3314]: E0909 23:44:47.975202 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.975267 kubelet[3314]: W0909 23:44:47.975236 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.975267 kubelet[3314]: E0909 23:44:47.975263 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.976846 kubelet[3314]: E0909 23:44:47.976802 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.976846 kubelet[3314]: W0909 23:44:47.976839 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.976981 kubelet[3314]: E0909 23:44:47.976888 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:47.977998 kubelet[3314]: E0909 23:44:47.977957 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:47.977998 kubelet[3314]: W0909 23:44:47.977993 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:47.978206 kubelet[3314]: E0909 23:44:47.978021 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:48.670737 kubelet[3314]: E0909 23:44:48.670673 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437"
Sep 9 23:44:48.770438 containerd[2011]: time="2025-09-09T23:44:48.769626505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:48.771556 containerd[2011]: time="2025-09-09T23:44:48.771512815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 9 23:44:48.776782 containerd[2011]: time="2025-09-09T23:44:48.776704678Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:48.783704 containerd[2011]: time="2025-09-09T23:44:48.783637663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:48.785895 containerd[2011]: time="2025-09-09T23:44:48.785660458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.426022781s"
Sep 9 23:44:48.785895 containerd[2011]: time="2025-09-09T23:44:48.785723057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 9 23:44:48.791384 containerd[2011]: time="2025-09-09T23:44:48.791316725Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 23:44:48.811827 containerd[2011]: time="2025-09-09T23:44:48.811760938Z" level=info msg="Container 9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:48.819297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029788300.mount: Deactivated successfully.
Sep 9 23:44:48.836821 containerd[2011]: time="2025-09-09T23:44:48.836737296Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\""
Sep 9 23:44:48.837731 containerd[2011]: time="2025-09-09T23:44:48.837675517Z" level=info msg="StartContainer for \"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\""
Sep 9 23:44:48.841018 containerd[2011]: time="2025-09-09T23:44:48.840910581Z" level=info msg="connecting to shim 9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f" address="unix:///run/containerd/s/8c9c2de84ec6cfaacd08c31a161930fd38c26f0ac1ff3419961ecef86498926a" protocol=ttrpc version=3
Sep 9 23:44:48.881692 systemd[1]: Started cri-containerd-9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f.scope - libcontainer container 9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f.
Sep 9 23:44:48.904515 kubelet[3314]: I0909 23:44:48.903666 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:44:48.950418 kubelet[3314]: E0909 23:44:48.950257 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:48.951492 kubelet[3314]: W0909 23:44:48.951145 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:48.951492 kubelet[3314]: E0909 23:44:48.951217 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:48.952532 kubelet[3314]: E0909 23:44:48.952146 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:48.952532 kubelet[3314]: W0909 23:44:48.952175 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:48.952532 kubelet[3314]: E0909 23:44:48.952199 3314 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:48.995821 containerd[2011]: time="2025-09-09T23:44:48.995730426Z" level=info msg="StartContainer for \"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\" returns successfully" Sep 9 23:44:49.021771 systemd[1]: cri-containerd-9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f.scope: Deactivated successfully. Sep 9 23:44:49.032309 containerd[2011]: time="2025-09-09T23:44:49.032129268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\" id:\"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\" pid:4167 exited_at:{seconds:1757461489 nanos:31277190}" Sep 9 23:44:49.032309 containerd[2011]: time="2025-09-09T23:44:49.032221366Z" level=info msg="received exit event container_id:\"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\" id:\"9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f\" pid:4167 exited_at:{seconds:1757461489 nanos:31277190}" Sep 9 23:44:49.073716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b3221f530b95b78f6cf53d6b76e9eabb01305df44f34d892d632a4a00e92b2f-rootfs.mount: Deactivated successfully.
Sep 9 23:44:49.914407 containerd[2011]: time="2025-09-09T23:44:49.914161542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 23:44:49.950538 kubelet[3314]: I0909 23:44:49.950090 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bf9dd8f86-jdrpk" podStartSLOduration=4.1111408130000004 podStartE2EDuration="6.950067719s" podCreationTimestamp="2025-09-09 23:44:43 +0000 UTC" firstStartedPulling="2025-09-09 23:44:44.518295284 +0000 UTC m=+28.093114474" lastFinishedPulling="2025-09-09 23:44:47.357222202 +0000 UTC m=+30.932041380" observedRunningTime="2025-09-09 23:44:47.927218425 +0000 UTC m=+31.502037639" watchObservedRunningTime="2025-09-09 23:44:49.950067719 +0000 UTC m=+33.524886885" Sep 9 23:44:50.670342 kubelet[3314]: E0909 23:44:50.670260 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437" Sep 9 23:44:52.670171 kubelet[3314]: E0909 23:44:52.670038 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437" Sep 9 23:44:52.976939 containerd[2011]: time="2025-09-09T23:44:52.976821424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:52.979155 containerd[2011]: time="2025-09-09T23:44:52.978787179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 23:44:52.981245 containerd[2011]: 
time="2025-09-09T23:44:52.981186314Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:52.986112 containerd[2011]: time="2025-09-09T23:44:52.986063127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:52.987340 containerd[2011]: time="2025-09-09T23:44:52.987281988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.07305003s" Sep 9 23:44:52.987763 containerd[2011]: time="2025-09-09T23:44:52.987337203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 23:44:52.993057 containerd[2011]: time="2025-09-09T23:44:52.992997636Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 23:44:53.015405 containerd[2011]: time="2025-09-09T23:44:53.013648989Z" level=info msg="Container 914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:53.023078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922222699.mount: Deactivated successfully. 
Sep 9 23:44:53.040260 containerd[2011]: time="2025-09-09T23:44:53.040187712Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\"" Sep 9 23:44:53.041721 containerd[2011]: time="2025-09-09T23:44:53.041660944Z" level=info msg="StartContainer for \"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\"" Sep 9 23:44:53.048303 containerd[2011]: time="2025-09-09T23:44:53.048085079Z" level=info msg="connecting to shim 914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6" address="unix:///run/containerd/s/8c9c2de84ec6cfaacd08c31a161930fd38c26f0ac1ff3419961ecef86498926a" protocol=ttrpc version=3 Sep 9 23:44:53.089678 systemd[1]: Started cri-containerd-914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6.scope - libcontainer container 914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6. Sep 9 23:44:53.172429 containerd[2011]: time="2025-09-09T23:44:53.172340516Z" level=info msg="StartContainer for \"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\" returns successfully" Sep 9 23:44:54.194120 containerd[2011]: time="2025-09-09T23:44:54.194045156Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:44:54.198177 systemd[1]: cri-containerd-914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6.scope: Deactivated successfully. Sep 9 23:44:54.199764 systemd[1]: cri-containerd-914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6.scope: Consumed 929ms CPU time, 185.2M memory peak, 165.8M written to disk. 
Sep 9 23:44:54.205361 containerd[2011]: time="2025-09-09T23:44:54.205175019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\" id:\"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\" pid:4256 exited_at:{seconds:1757461494 nanos:204664439}" Sep 9 23:44:54.205361 containerd[2011]: time="2025-09-09T23:44:54.205315621Z" level=info msg="received exit event container_id:\"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\" id:\"914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6\" pid:4256 exited_at:{seconds:1757461494 nanos:204664439}" Sep 9 23:44:54.231804 kubelet[3314]: I0909 23:44:54.231727 3314 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:44:54.268066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914136ce0fd466be2096fc9fe226862c3eef4017286a38537a9eba8f00f933d6-rootfs.mount: Deactivated successfully. Sep 9 23:44:54.361129 systemd[1]: Created slice kubepods-besteffort-pod61f33671_c702_4cca_aaa1_36fa26aa921f.slice - libcontainer container kubepods-besteffort-pod61f33671_c702_4cca_aaa1_36fa26aa921f.slice. Sep 9 23:44:54.387036 systemd[1]: Created slice kubepods-burstable-pod5bf39971_e401_40db_bcc6_2f920685cab2.slice - libcontainer container kubepods-burstable-pod5bf39971_e401_40db_bcc6_2f920685cab2.slice. 
Sep 9 23:44:54.414425 kubelet[3314]: I0909 23:44:54.413710 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89cd5784-8525-40b2-b9f2-e26328eb1dea-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-p985n\" (UID: \"89cd5784-8525-40b2-b9f2-e26328eb1dea\") " pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:54.414425 kubelet[3314]: I0909 23:44:54.413781 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lmc\" (UniqueName: \"kubernetes.io/projected/c8c73f8a-ee9c-4c93-8ca5-690098513149-kube-api-access-n9lmc\") pod \"coredns-668d6bf9bc-zhnjv\" (UID: \"c8c73f8a-ee9c-4c93-8ca5-690098513149\") " pod="kube-system/coredns-668d6bf9bc-zhnjv"
Sep 9 23:44:54.414425 kubelet[3314]: I0909 23:44:54.413824 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm462\" (UniqueName: \"kubernetes.io/projected/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-kube-api-access-pm462\") pod \"whisker-5f74d6545-f4l5c\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " pod="calico-system/whisker-5f74d6545-f4l5c"
Sep 9 23:44:54.414425 kubelet[3314]: I0909 23:44:54.413866 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bf39971-e401-40db-bcc6-2f920685cab2-config-volume\") pod \"coredns-668d6bf9bc-qkrg2\" (UID: \"5bf39971-e401-40db-bcc6-2f920685cab2\") " pod="kube-system/coredns-668d6bf9bc-qkrg2"
Sep 9 23:44:54.414425 kubelet[3314]: I0909 23:44:54.413912 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-ca-bundle\") pod \"whisker-5f74d6545-f4l5c\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " pod="calico-system/whisker-5f74d6545-f4l5c"
Sep 9 23:44:54.414974 kubelet[3314]: I0909 23:44:54.413951 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcht\" (UniqueName: \"kubernetes.io/projected/5bf39971-e401-40db-bcc6-2f920685cab2-kube-api-access-hwcht\") pod \"coredns-668d6bf9bc-qkrg2\" (UID: \"5bf39971-e401-40db-bcc6-2f920685cab2\") " pod="kube-system/coredns-668d6bf9bc-qkrg2"
Sep 9 23:44:54.414974 kubelet[3314]: I0909 23:44:54.413990 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdfzf\" (UniqueName: \"kubernetes.io/projected/0fde2978-d9a0-45f5-a0c2-85155fe6d2c1-kube-api-access-bdfzf\") pod \"calico-kube-controllers-5b9586dc6d-6msnh\" (UID: \"0fde2978-d9a0-45f5-a0c2-85155fe6d2c1\") " pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh"
Sep 9 23:44:54.414974 kubelet[3314]: I0909 23:44:54.414032 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/89cd5784-8525-40b2-b9f2-e26328eb1dea-goldmane-key-pair\") pod \"goldmane-54d579b49d-p985n\" (UID: \"89cd5784-8525-40b2-b9f2-e26328eb1dea\") " pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:54.414974 kubelet[3314]: I0909 23:44:54.414067 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76w2\" (UniqueName: \"kubernetes.io/projected/89cd5784-8525-40b2-b9f2-e26328eb1dea-kube-api-access-q76w2\") pod \"goldmane-54d579b49d-p985n\" (UID: \"89cd5784-8525-40b2-b9f2-e26328eb1dea\") " pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:54.414974 kubelet[3314]: I0909 23:44:54.414101 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-backend-key-pair\") pod \"whisker-5f74d6545-f4l5c\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " pod="calico-system/whisker-5f74d6545-f4l5c"
Sep 9 23:44:54.415244 kubelet[3314]: I0909 23:44:54.414145 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fde2978-d9a0-45f5-a0c2-85155fe6d2c1-tigera-ca-bundle\") pod \"calico-kube-controllers-5b9586dc6d-6msnh\" (UID: \"0fde2978-d9a0-45f5-a0c2-85155fe6d2c1\") " pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh"
Sep 9 23:44:54.415244 kubelet[3314]: I0909 23:44:54.414185 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvpt\" (UniqueName: \"kubernetes.io/projected/61f33671-c702-4cca-aaa1-36fa26aa921f-kube-api-access-gqvpt\") pod \"calico-apiserver-755779d75d-47hsp\" (UID: \"61f33671-c702-4cca-aaa1-36fa26aa921f\") " pod="calico-apiserver/calico-apiserver-755779d75d-47hsp"
Sep 9 23:44:54.415244 kubelet[3314]: I0909 23:44:54.414222 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8c73f8a-ee9c-4c93-8ca5-690098513149-config-volume\") pod \"coredns-668d6bf9bc-zhnjv\" (UID: \"c8c73f8a-ee9c-4c93-8ca5-690098513149\") " pod="kube-system/coredns-668d6bf9bc-zhnjv"
Sep 9 23:44:54.415244 kubelet[3314]: I0909 23:44:54.414265 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61f33671-c702-4cca-aaa1-36fa26aa921f-calico-apiserver-certs\") pod \"calico-apiserver-755779d75d-47hsp\" (UID: \"61f33671-c702-4cca-aaa1-36fa26aa921f\") " pod="calico-apiserver/calico-apiserver-755779d75d-47hsp"
Sep 9 23:44:54.415244 kubelet[3314]: I0909 23:44:54.414302 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89cd5784-8525-40b2-b9f2-e26328eb1dea-config\") pod \"goldmane-54d579b49d-p985n\" (UID: \"89cd5784-8525-40b2-b9f2-e26328eb1dea\") " pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:54.419087 kubelet[3314]: I0909 23:44:54.414343 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d827e045-efbc-4ba3-baf6-36db46b00e7b-calico-apiserver-certs\") pod \"calico-apiserver-755779d75d-f8ptg\" (UID: \"d827e045-efbc-4ba3-baf6-36db46b00e7b\") " pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg"
Sep 9 23:44:54.419087 kubelet[3314]: I0909 23:44:54.414431 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmsc\" (UniqueName: \"kubernetes.io/projected/d827e045-efbc-4ba3-baf6-36db46b00e7b-kube-api-access-xkmsc\") pod \"calico-apiserver-755779d75d-f8ptg\" (UID: \"d827e045-efbc-4ba3-baf6-36db46b00e7b\") " pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg"
Sep 9 23:44:54.418145 systemd[1]: Created slice kubepods-besteffort-pod89cd5784_8525_40b2_b9f2_e26328eb1dea.slice - libcontainer container kubepods-besteffort-pod89cd5784_8525_40b2_b9f2_e26328eb1dea.slice.
Sep 9 23:44:54.441545 systemd[1]: Created slice kubepods-besteffort-podd827e045_efbc_4ba3_baf6_36db46b00e7b.slice - libcontainer container kubepods-besteffort-podd827e045_efbc_4ba3_baf6_36db46b00e7b.slice.
Sep 9 23:44:54.462974 systemd[1]: Created slice kubepods-besteffort-podfe8899e1_141a_4f8a_a885_c2f9947bdfb7.slice - libcontainer container kubepods-besteffort-podfe8899e1_141a_4f8a_a885_c2f9947bdfb7.slice.
Sep 9 23:44:54.488067 systemd[1]: Created slice kubepods-besteffort-pod0fde2978_d9a0_45f5_a0c2_85155fe6d2c1.slice - libcontainer container kubepods-besteffort-pod0fde2978_d9a0_45f5_a0c2_85155fe6d2c1.slice.
Sep 9 23:44:54.503968 systemd[1]: Created slice kubepods-burstable-podc8c73f8a_ee9c_4c93_8ca5_690098513149.slice - libcontainer container kubepods-burstable-podc8c73f8a_ee9c_4c93_8ca5_690098513149.slice.
Sep 9 23:44:54.687809 containerd[2011]: time="2025-09-09T23:44:54.687676796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-47hsp,Uid:61f33671-c702-4cca-aaa1-36fa26aa921f,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 23:44:54.704340 systemd[1]: Created slice kubepods-besteffort-pod6e7cc1ef_5e33_408e_bd7c_721420748437.slice - libcontainer container kubepods-besteffort-pod6e7cc1ef_5e33_408e_bd7c_721420748437.slice.
Sep 9 23:44:54.708066 containerd[2011]: time="2025-09-09T23:44:54.708002786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qkrg2,Uid:5bf39971-e401-40db-bcc6-2f920685cab2,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:54.715002 containerd[2011]: time="2025-09-09T23:44:54.714738188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44bk5,Uid:6e7cc1ef-5e33-408e-bd7c-721420748437,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.732937 containerd[2011]: time="2025-09-09T23:44:54.732858256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p985n,Uid:89cd5784-8525-40b2-b9f2-e26328eb1dea,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.748336 containerd[2011]: time="2025-09-09T23:44:54.748204818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-f8ptg,Uid:d827e045-efbc-4ba3-baf6-36db46b00e7b,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 23:44:54.779633 containerd[2011]: time="2025-09-09T23:44:54.779546301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f74d6545-f4l5c,Uid:fe8899e1-141a-4f8a-a885-c2f9947bdfb7,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.809157 containerd[2011]: time="2025-09-09T23:44:54.808896565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9586dc6d-6msnh,Uid:0fde2978-d9a0-45f5-a0c2-85155fe6d2c1,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.814479 containerd[2011]: time="2025-09-09T23:44:54.814420489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zhnjv,Uid:c8c73f8a-ee9c-4c93-8ca5-690098513149,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:54.983984 containerd[2011]: time="2025-09-09T23:44:54.983458115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 23:44:55.242880 containerd[2011]: time="2025-09-09T23:44:55.242475669Z" level=error msg="Failed to destroy network for sandbox \"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.255167 containerd[2011]: time="2025-09-09T23:44:55.254996786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44bk5,Uid:6e7cc1ef-5e33-408e-bd7c-721420748437,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.256414 kubelet[3314]: E0909 23:44:55.255638 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.256414 kubelet[3314]: E0909 23:44:55.255731 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44bk5"
Sep 9 23:44:55.256414 kubelet[3314]: E0909 23:44:55.255768 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44bk5"
Sep 9 23:44:55.257130 kubelet[3314]: E0909 23:44:55.255833 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-44bk5_calico-system(6e7cc1ef-5e33-408e-bd7c-721420748437)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-44bk5_calico-system(6e7cc1ef-5e33-408e-bd7c-721420748437)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e20a2e3cd63291074717d6e29c32e5d66ad19b32f240dc343e135c68afb3f00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44bk5" podUID="6e7cc1ef-5e33-408e-bd7c-721420748437"
Sep 9 23:44:55.278655 containerd[2011]: time="2025-09-09T23:44:55.278567794Z" level=error msg="Failed to destroy network for sandbox \"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.297579 containerd[2011]: time="2025-09-09T23:44:55.297490019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-f8ptg,Uid:d827e045-efbc-4ba3-baf6-36db46b00e7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.302016 kubelet[3314]: E0909 23:44:55.298093 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.302016 kubelet[3314]: E0909 23:44:55.301490 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg"
Sep 9 23:44:55.302016 kubelet[3314]: E0909 23:44:55.301542 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg"
Sep 9 23:44:55.302294 kubelet[3314]: E0909 23:44:55.301621 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-755779d75d-f8ptg_calico-apiserver(d827e045-efbc-4ba3-baf6-36db46b00e7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-755779d75d-f8ptg_calico-apiserver(d827e045-efbc-4ba3-baf6-36db46b00e7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469ed4d3884490c5d18ec47b8ae63b50a2345e7d5a9c4fe499bc4b6dfb3c9b16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg" podUID="d827e045-efbc-4ba3-baf6-36db46b00e7b"
Sep 9 23:44:55.310831 systemd[1]: run-netns-cni\x2d39104cab\x2db6c3\x2d8062\x2d1a62\x2dfba30050e494.mount: Deactivated successfully.
Sep 9 23:44:55.355856 containerd[2011]: time="2025-09-09T23:44:55.355608676Z" level=error msg="Failed to destroy network for sandbox \"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.361431 containerd[2011]: time="2025-09-09T23:44:55.360134950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-47hsp,Uid:61f33671-c702-4cca-aaa1-36fa26aa921f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.361022 systemd[1]: run-netns-cni\x2df4900f8b\x2d4db8\x2dcd7e\x2d2fa0\x2d0abeab4d74d6.mount: Deactivated successfully.
Sep 9 23:44:55.364975 kubelet[3314]: E0909 23:44:55.364899 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.365605 kubelet[3314]: E0909 23:44:55.365012 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755779d75d-47hsp"
Sep 9 23:44:55.365605 kubelet[3314]: E0909 23:44:55.365070 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755779d75d-47hsp"
Sep 9 23:44:55.365605 kubelet[3314]: E0909 23:44:55.365200 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-755779d75d-47hsp_calico-apiserver(61f33671-c702-4cca-aaa1-36fa26aa921f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-755779d75d-47hsp_calico-apiserver(61f33671-c702-4cca-aaa1-36fa26aa921f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420f4ae6352368e7245aa4bc6b3aee21f1a32221b1d28f67cf37c319a2d6c4c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-755779d75d-47hsp" podUID="61f33671-c702-4cca-aaa1-36fa26aa921f"
Sep 9 23:44:55.368804 containerd[2011]: time="2025-09-09T23:44:55.368664433Z" level=error msg="Failed to destroy network for sandbox \"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.375938 systemd[1]: run-netns-cni\x2daa74b523\x2d7eff\x2ddfc3\x2d4c22\x2d6a34d3159099.mount: Deactivated successfully.
Sep 9 23:44:55.383324 containerd[2011]: time="2025-09-09T23:44:55.383250810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zhnjv,Uid:c8c73f8a-ee9c-4c93-8ca5-690098513149,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.384084 kubelet[3314]: E0909 23:44:55.383825 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.384084 kubelet[3314]: E0909 23:44:55.383907 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zhnjv"
Sep 9 23:44:55.384084 kubelet[3314]: E0909 23:44:55.383940 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zhnjv"
Sep 9 23:44:55.384351 kubelet[3314]: E0909 23:44:55.384020 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zhnjv_kube-system(c8c73f8a-ee9c-4c93-8ca5-690098513149)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zhnjv_kube-system(c8c73f8a-ee9c-4c93-8ca5-690098513149)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87a1ca265f089ffc1bd41631702c5f679bfc10a82faf2d641d00f023e4212522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zhnjv" podUID="c8c73f8a-ee9c-4c93-8ca5-690098513149"
Sep 9 23:44:55.388747 containerd[2011]: time="2025-09-09T23:44:55.388507217Z" level=error msg="Failed to destroy network for sandbox \"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.391998 containerd[2011]: time="2025-09-09T23:44:55.391929059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p985n,Uid:89cd5784-8525-40b2-b9f2-e26328eb1dea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.395401 kubelet[3314]: E0909 23:44:55.393567 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.395401 kubelet[3314]: E0909 23:44:55.393642 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:55.395401 kubelet[3314]: E0909 23:44:55.393675 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-p985n"
Sep 9 23:44:55.395659 kubelet[3314]: E0909 23:44:55.393742 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-p985n_calico-system(89cd5784-8525-40b2-b9f2-e26328eb1dea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-p985n_calico-system(89cd5784-8525-40b2-b9f2-e26328eb1dea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a0a1e93ab751c05cf2d3116c9fd7b8870ab8d44025f14c1a890a6f7a970469a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-p985n" podUID="89cd5784-8525-40b2-b9f2-e26328eb1dea"
Sep 9 23:44:55.395433 systemd[1]: run-netns-cni\x2dd56fe9e8\x2d63dc\x2d0e88\x2d7310\x2d9edadab5c032.mount: Deactivated successfully.
Sep 9 23:44:55.405410 containerd[2011]: time="2025-09-09T23:44:55.405041364Z" level=error msg="Failed to destroy network for sandbox \"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.407914 containerd[2011]: time="2025-09-09T23:44:55.407831847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qkrg2,Uid:5bf39971-e401-40db-bcc6-2f920685cab2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.408998 kubelet[3314]: E0909 23:44:55.408481 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.408998 kubelet[3314]: E0909 23:44:55.408567 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qkrg2"
Sep 9 23:44:55.408998 kubelet[3314]: E0909 23:44:55.408602 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qkrg2"
Sep 9 23:44:55.409258 kubelet[3314]: E0909 23:44:55.408681 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qkrg2_kube-system(5bf39971-e401-40db-bcc6-2f920685cab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qkrg2_kube-system(5bf39971-e401-40db-bcc6-2f920685cab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6219fa4ff403121d898acfe7ae1f104b0e8dc89a3c490c234d314b10e18d827f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qkrg2" podUID="5bf39971-e401-40db-bcc6-2f920685cab2"
Sep 9 23:44:55.412963 containerd[2011]: time="2025-09-09T23:44:55.412874787Z" level=error msg="Failed to destroy network for sandbox \"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.416577 containerd[2011]: time="2025-09-09T23:44:55.416481293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f74d6545-f4l5c,Uid:fe8899e1-141a-4f8a-a885-c2f9947bdfb7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.416957 kubelet[3314]: E0909 23:44:55.416806 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.416957 kubelet[3314]: E0909 23:44:55.416883 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f74d6545-f4l5c"
Sep 9 23:44:55.416957 kubelet[3314]: E0909 23:44:55.416918 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f74d6545-f4l5c"
Sep 9 23:44:55.417267 kubelet[3314]: E0909 23:44:55.416998 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f74d6545-f4l5c_calico-system(fe8899e1-141a-4f8a-a885-c2f9947bdfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f74d6545-f4l5c_calico-system(fe8899e1-141a-4f8a-a885-c2f9947bdfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd1e77fb905070e6501b43488c35fcd88f4e1e5ff6a444986ca71c1f5f92ac84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f74d6545-f4l5c" podUID="fe8899e1-141a-4f8a-a885-c2f9947bdfb7"
Sep 9 23:44:55.420230 containerd[2011]: time="2025-09-09T23:44:55.420156654Z" level=error msg="Failed to destroy network for sandbox \"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.421943 containerd[2011]: time="2025-09-09T23:44:55.421725934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9586dc6d-6msnh,Uid:0fde2978-d9a0-45f5-a0c2-85155fe6d2c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.422768 kubelet[3314]: E0909 23:44:55.422459 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 23:44:55.422768 kubelet[3314]: E0909 23:44:55.422541 3314 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh"
Sep 9 23:44:55.422768 kubelet[3314]: E0909 23:44:55.422575 3314 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh"
Sep 9 23:44:55.423075 kubelet[3314]: E0909 23:44:55.422672 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b9586dc6d-6msnh_calico-system(0fde2978-d9a0-45f5-a0c2-85155fe6d2c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b9586dc6d-6msnh_calico-system(0fde2978-d9a0-45f5-a0c2-85155fe6d2c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2933526b2b089208b5f9db67ad82888965c77cb974a6f57759ccd76b9ec1bc2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh" podUID="0fde2978-d9a0-45f5-a0c2-85155fe6d2c1"
Sep 9 23:44:56.032773 kubelet[3314]: I0909 23:44:56.032129 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:44:56.264065 systemd[1]: run-netns-cni\x2d664e5ce4\x2d93dd\x2d9bf0\x2df369\x2d4465ae298512.mount: Deactivated successfully.
Sep 9 23:44:56.264301 systemd[1]: run-netns-cni\x2d595c9e3b\x2d949d\x2d2e59\x2dd73b\x2d65d9566d7833.mount: Deactivated successfully. Sep 9 23:44:56.264450 systemd[1]: run-netns-cni\x2dff2d841c\x2d29a8\x2d4875\x2dcf4b\x2d1d6efc8e453f.mount: Deactivated successfully. Sep 9 23:45:01.072479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359209371.mount: Deactivated successfully. Sep 9 23:45:01.135804 containerd[2011]: time="2025-09-09T23:45:01.135611241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:01.138006 containerd[2011]: time="2025-09-09T23:45:01.137963637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 23:45:01.139833 containerd[2011]: time="2025-09-09T23:45:01.139756589Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:01.144572 containerd[2011]: time="2025-09-09T23:45:01.144485464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:01.145887 containerd[2011]: time="2025-09-09T23:45:01.145612431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.162071439s" Sep 9 23:45:01.145887 containerd[2011]: time="2025-09-09T23:45:01.145675211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 23:45:01.176706 containerd[2011]: time="2025-09-09T23:45:01.176599389Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 23:45:01.195402 containerd[2011]: time="2025-09-09T23:45:01.193710172Z" level=info msg="Container 28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:01.204298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644204824.mount: Deactivated successfully. Sep 9 23:45:01.226752 containerd[2011]: time="2025-09-09T23:45:01.226673150Z" level=info msg="CreateContainer within sandbox \"71fda0a271d54346d756cbfa6819f39f29090eade7a409746b2f3444beb806b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\"" Sep 9 23:45:01.228157 containerd[2011]: time="2025-09-09T23:45:01.228098610Z" level=info msg="StartContainer for \"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\"" Sep 9 23:45:01.231417 containerd[2011]: time="2025-09-09T23:45:01.231322461Z" level=info msg="connecting to shim 28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef" address="unix:///run/containerd/s/8c9c2de84ec6cfaacd08c31a161930fd38c26f0ac1ff3419961ecef86498926a" protocol=ttrpc version=3 Sep 9 23:45:01.273717 systemd[1]: Started cri-containerd-28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef.scope - libcontainer container 28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef. Sep 9 23:45:01.362067 containerd[2011]: time="2025-09-09T23:45:01.361469976Z" level=info msg="StartContainer for \"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" returns successfully" Sep 9 23:45:01.609194 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Sep 9 23:45:01.609323 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 23:45:01.880316 kubelet[3314]: I0909 23:45:01.879567 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-backend-key-pair\") pod \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " Sep 9 23:45:01.880316 kubelet[3314]: I0909 23:45:01.879633 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-ca-bundle\") pod \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " Sep 9 23:45:01.880316 kubelet[3314]: I0909 23:45:01.879695 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm462\" (UniqueName: \"kubernetes.io/projected/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-kube-api-access-pm462\") pod \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\" (UID: \"fe8899e1-141a-4f8a-a885-c2f9947bdfb7\") " Sep 9 23:45:01.888336 kubelet[3314]: I0909 23:45:01.887705 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-kube-api-access-pm462" (OuterVolumeSpecName: "kube-api-access-pm462") pod "fe8899e1-141a-4f8a-a885-c2f9947bdfb7" (UID: "fe8899e1-141a-4f8a-a885-c2f9947bdfb7"). InnerVolumeSpecName "kube-api-access-pm462". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:45:01.888336 kubelet[3314]: I0909 23:45:01.888272 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fe8899e1-141a-4f8a-a885-c2f9947bdfb7" (UID: "fe8899e1-141a-4f8a-a885-c2f9947bdfb7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:45:01.889639 kubelet[3314]: I0909 23:45:01.889587 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fe8899e1-141a-4f8a-a885-c2f9947bdfb7" (UID: "fe8899e1-141a-4f8a-a885-c2f9947bdfb7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:45:01.981123 kubelet[3314]: I0909 23:45:01.981078 3314 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-backend-key-pair\") on node \"ip-172-31-18-64\" DevicePath \"\"" Sep 9 23:45:01.981345 kubelet[3314]: I0909 23:45:01.981323 3314 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-whisker-ca-bundle\") on node \"ip-172-31-18-64\" DevicePath \"\"" Sep 9 23:45:01.981521 kubelet[3314]: I0909 23:45:01.981488 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pm462\" (UniqueName: \"kubernetes.io/projected/fe8899e1-141a-4f8a-a885-c2f9947bdfb7-kube-api-access-pm462\") on node \"ip-172-31-18-64\" DevicePath \"\"" Sep 9 23:45:02.040092 systemd[1]: Removed slice kubepods-besteffort-podfe8899e1_141a_4f8a_a885_c2f9947bdfb7.slice - libcontainer container 
kubepods-besteffort-podfe8899e1_141a_4f8a_a885_c2f9947bdfb7.slice. Sep 9 23:45:02.075166 kubelet[3314]: I0909 23:45:02.073639 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7dq62" podStartSLOduration=1.718598877 podStartE2EDuration="18.0736079s" podCreationTimestamp="2025-09-09 23:44:44 +0000 UTC" firstStartedPulling="2025-09-09 23:44:44.792872446 +0000 UTC m=+28.367691624" lastFinishedPulling="2025-09-09 23:45:01.147881469 +0000 UTC m=+44.722700647" observedRunningTime="2025-09-09 23:45:02.064830709 +0000 UTC m=+45.639649911" watchObservedRunningTime="2025-09-09 23:45:02.0736079 +0000 UTC m=+45.648427078" Sep 9 23:45:02.079849 systemd[1]: var-lib-kubelet-pods-fe8899e1\x2d141a\x2d4f8a\x2da885\x2dc2f9947bdfb7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpm462.mount: Deactivated successfully. Sep 9 23:45:02.080329 systemd[1]: var-lib-kubelet-pods-fe8899e1\x2d141a\x2d4f8a\x2da885\x2dc2f9947bdfb7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 23:45:02.215229 systemd[1]: Created slice kubepods-besteffort-pode0d9e2dd_4204_47c4_ba90_171565fc628c.slice - libcontainer container kubepods-besteffort-pode0d9e2dd_4204_47c4_ba90_171565fc628c.slice. 
Sep 9 23:45:02.283882 kubelet[3314]: I0909 23:45:02.283793 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0d9e2dd-4204-47c4-ba90-171565fc628c-whisker-backend-key-pair\") pod \"whisker-76d47f499b-z5qps\" (UID: \"e0d9e2dd-4204-47c4-ba90-171565fc628c\") " pod="calico-system/whisker-76d47f499b-z5qps" Sep 9 23:45:02.284043 kubelet[3314]: I0909 23:45:02.283903 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77tf\" (UniqueName: \"kubernetes.io/projected/e0d9e2dd-4204-47c4-ba90-171565fc628c-kube-api-access-w77tf\") pod \"whisker-76d47f499b-z5qps\" (UID: \"e0d9e2dd-4204-47c4-ba90-171565fc628c\") " pod="calico-system/whisker-76d47f499b-z5qps" Sep 9 23:45:02.284043 kubelet[3314]: I0909 23:45:02.283944 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d9e2dd-4204-47c4-ba90-171565fc628c-whisker-ca-bundle\") pod \"whisker-76d47f499b-z5qps\" (UID: \"e0d9e2dd-4204-47c4-ba90-171565fc628c\") " pod="calico-system/whisker-76d47f499b-z5qps" Sep 9 23:45:02.326443 containerd[2011]: time="2025-09-09T23:45:02.326337407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" id:\"936e08297c5038e2fbb384db5dc5e411a7f83a60b82f2614689ee3cd06adc78c\" pid:4581 exit_status:1 exited_at:{seconds:1757461502 nanos:325915636}" Sep 9 23:45:02.525734 containerd[2011]: time="2025-09-09T23:45:02.525215874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d47f499b-z5qps,Uid:e0d9e2dd-4204-47c4-ba90-171565fc628c,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:02.679752 kubelet[3314]: I0909 23:45:02.679703 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fe8899e1-141a-4f8a-a885-c2f9947bdfb7" path="/var/lib/kubelet/pods/fe8899e1-141a-4f8a-a885-c2f9947bdfb7/volumes" Sep 9 23:45:02.841407 (udev-worker)[4553]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:45:02.845549 systemd-networkd[1904]: cali12d61d3f8ea: Link UP Sep 9 23:45:02.847626 systemd-networkd[1904]: cali12d61d3f8ea: Gained carrier Sep 9 23:45:02.879988 containerd[2011]: 2025-09-09 23:45:02.574 [INFO][4605] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:02.879988 containerd[2011]: 2025-09-09 23:45:02.668 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0 whisker-76d47f499b- calico-system e0d9e2dd-4204-47c4-ba90-171565fc628c 913 0 2025-09-09 23:45:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76d47f499b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-64 whisker-76d47f499b-z5qps eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12d61d3f8ea [] [] }} ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-" Sep 9 23:45:02.879988 containerd[2011]: 2025-09-09 23:45:02.668 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.879988 containerd[2011]: 2025-09-09 23:45:02.753 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" 
HandleID="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Workload="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.753 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" HandleID="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Workload="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038cd50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-64", "pod":"whisker-76d47f499b-z5qps", "timestamp":"2025-09-09 23:45:02.753720491 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.754 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.754 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.754 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64' Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.769 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" host="ip-172-31-18-64" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.781 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.791 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.794 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.797 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:02.880457 containerd[2011]: 2025-09-09 23:45:02.797 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" host="ip-172-31-18-64" Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.800 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7 Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.807 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" host="ip-172-31-18-64" Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.818 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.1/26] block=192.168.127.0/26 
handle="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" host="ip-172-31-18-64" Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.819 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.1/26] handle="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" host="ip-172-31-18-64" Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.819 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:45:02.881094 containerd[2011]: 2025-09-09 23:45:02.819 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.1/26] IPv6=[] ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" HandleID="k8s-pod-network.f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Workload="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.881359 containerd[2011]: 2025-09-09 23:45:02.827 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0", GenerateName:"whisker-76d47f499b-", Namespace:"calico-system", SelfLink:"", UID:"e0d9e2dd-4204-47c4-ba90-171565fc628c", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d47f499b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"whisker-76d47f499b-z5qps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12d61d3f8ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:02.881359 containerd[2011]: 2025-09-09 23:45:02.827 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.1/32] ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.881612 containerd[2011]: 2025-09-09 23:45:02.827 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12d61d3f8ea ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.881612 containerd[2011]: 2025-09-09 23:45:02.848 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.881748 containerd[2011]: 2025-09-09 23:45:02.849 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" 
Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0", GenerateName:"whisker-76d47f499b-", Namespace:"calico-system", SelfLink:"", UID:"e0d9e2dd-4204-47c4-ba90-171565fc628c", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d47f499b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7", Pod:"whisker-76d47f499b-z5qps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12d61d3f8ea", MAC:"12:15:d9:ad:13:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:02.881868 containerd[2011]: 2025-09-09 23:45:02.873 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" Namespace="calico-system" Pod="whisker-76d47f499b-z5qps" WorkloadEndpoint="ip--172--31--18--64-k8s-whisker--76d47f499b--z5qps-eth0" Sep 9 23:45:02.926343 containerd[2011]: 
time="2025-09-09T23:45:02.926252247Z" level=info msg="connecting to shim f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7" address="unix:///run/containerd/s/84ea5ae924643e068a48bd2c4b4bfba2d40ff631a27d49426b2436a468a78dbd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:02.969743 systemd[1]: Started cri-containerd-f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7.scope - libcontainer container f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7. Sep 9 23:45:03.064199 containerd[2011]: time="2025-09-09T23:45:03.064024459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d47f499b-z5qps,Uid:e0d9e2dd-4204-47c4-ba90-171565fc628c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7\"" Sep 9 23:45:03.069963 containerd[2011]: time="2025-09-09T23:45:03.069890675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 23:45:03.187203 containerd[2011]: time="2025-09-09T23:45:03.186992553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" id:\"55694c64474bd0e8abe80cefb2e2c7afe6436cf7117a21696a85bedbcff0f2b9\" pid:4688 exit_status:1 exited_at:{seconds:1757461503 nanos:183651656}" Sep 9 23:45:04.006669 systemd-networkd[1904]: cali12d61d3f8ea: Gained IPv6LL Sep 9 23:45:04.740086 systemd-networkd[1904]: vxlan.calico: Link UP Sep 9 23:45:04.740110 systemd-networkd[1904]: vxlan.calico: Gained carrier Sep 9 23:45:04.825607 (udev-worker)[4556]: Network interface NamePolicy= disabled on kernel command line. 
Sep 9 23:45:05.023896 containerd[2011]: time="2025-09-09T23:45:05.023582358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:05.028938 containerd[2011]: time="2025-09-09T23:45:05.028880354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 23:45:05.031484 containerd[2011]: time="2025-09-09T23:45:05.030408742Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:05.034793 containerd[2011]: time="2025-09-09T23:45:05.034681689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:05.036536 containerd[2011]: time="2025-09-09T23:45:05.036490969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.966417011s" Sep 9 23:45:05.036683 containerd[2011]: time="2025-09-09T23:45:05.036654167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 23:45:05.043547 containerd[2011]: time="2025-09-09T23:45:05.043488450Z" level=info msg="CreateContainer within sandbox \"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 23:45:05.065406 containerd[2011]: time="2025-09-09T23:45:05.064585166Z" level=info 
msg="Container f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:05.073446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636826052.mount: Deactivated successfully. Sep 9 23:45:05.091499 containerd[2011]: time="2025-09-09T23:45:05.091433871Z" level=info msg="CreateContainer within sandbox \"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f\"" Sep 9 23:45:05.094125 containerd[2011]: time="2025-09-09T23:45:05.093978135Z" level=info msg="StartContainer for \"f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f\"" Sep 9 23:45:05.100576 containerd[2011]: time="2025-09-09T23:45:05.100502316Z" level=info msg="connecting to shim f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f" address="unix:///run/containerd/s/84ea5ae924643e068a48bd2c4b4bfba2d40ff631a27d49426b2436a468a78dbd" protocol=ttrpc version=3 Sep 9 23:45:05.150709 systemd[1]: Started cri-containerd-f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f.scope - libcontainer container f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f. 
Sep 9 23:45:05.243720 containerd[2011]: time="2025-09-09T23:45:05.243666387Z" level=info msg="StartContainer for \"f0ed87b61cb8bdade135710a6e6884bf9733a45bc30a3b4f7718eb5eefb0980f\" returns successfully"
Sep 9 23:45:05.252854 containerd[2011]: time="2025-09-09T23:45:05.252184656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 9 23:45:05.671079 containerd[2011]: time="2025-09-09T23:45:05.670946807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44bk5,Uid:6e7cc1ef-5e33-408e-bd7c-721420748437,Namespace:calico-system,Attempt:0,}"
Sep 9 23:45:05.876781 systemd-networkd[1904]: cali828d05c88e8: Link UP
Sep 9 23:45:05.879069 systemd-networkd[1904]: cali828d05c88e8: Gained carrier
Sep 9 23:45:05.928686 containerd[2011]: 2025-09-09 23:45:05.748 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0 csi-node-driver- calico-system 6e7cc1ef-5e33-408e-bd7c-721420748437 722 0 2025-09-09 23:44:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-64 csi-node-driver-44bk5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali828d05c88e8 [] [] }} ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-"
Sep 9 23:45:05.928686 containerd[2011]: 2025-09-09 23:45:05.748 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.928686 containerd[2011]: 2025-09-09 23:45:05.801 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" HandleID="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Workload="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.801 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" HandleID="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Workload="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-64", "pod":"csi-node-driver-44bk5", "timestamp":"2025-09-09 23:45:05.801315196 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.802 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.802 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.802 [INFO][4947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64'
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.818 [INFO][4947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" host="ip-172-31-18-64"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.825 [INFO][4947] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.836 [INFO][4947] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.839 [INFO][4947] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.843 [INFO][4947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:05.928964 containerd[2011]: 2025-09-09 23:45:05.843 [INFO][4947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" host="ip-172-31-18-64"
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.846 [INFO][4947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.853 [INFO][4947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" host="ip-172-31-18-64"
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.866 [INFO][4947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.2/26] block=192.168.127.0/26 handle="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" host="ip-172-31-18-64"
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.866 [INFO][4947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.2/26] handle="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" host="ip-172-31-18-64"
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.866 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:45:05.930891 containerd[2011]: 2025-09-09 23:45:05.866 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.2/26] IPv6=[] ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" HandleID="k8s-pod-network.c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Workload="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.931190 containerd[2011]: 2025-09-09 23:45:05.870 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e7cc1ef-5e33-408e-bd7c-721420748437", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"csi-node-driver-44bk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali828d05c88e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 23:45:05.931327 containerd[2011]: 2025-09-09 23:45:05.870 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.2/32] ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.931327 containerd[2011]: 2025-09-09 23:45:05.870 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali828d05c88e8 ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.931327 containerd[2011]: 2025-09-09 23:45:05.880 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.932537 containerd[2011]: 2025-09-09 23:45:05.883 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e7cc1ef-5e33-408e-bd7c-721420748437", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b", Pod:"csi-node-driver-44bk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali828d05c88e8", MAC:"1e:e6:c9:05:31:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 23:45:05.932829 containerd[2011]: 2025-09-09 23:45:05.923 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" Namespace="calico-system" Pod="csi-node-driver-44bk5" WorkloadEndpoint="ip--172--31--18--64-k8s-csi--node--driver--44bk5-eth0"
Sep 9 23:45:05.984722 containerd[2011]: time="2025-09-09T23:45:05.984515387Z" level=info msg="connecting to shim c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b" address="unix:///run/containerd/s/7fdac8837c89fcd63aaee2ff71f809f46e7d7ded4aaf1e3d1feef0c8daeebc15" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:06.068712 systemd[1]: Started cri-containerd-c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b.scope - libcontainer container c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b.
Sep 9 23:45:06.132515 containerd[2011]: time="2025-09-09T23:45:06.132359505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44bk5,Uid:6e7cc1ef-5e33-408e-bd7c-721420748437,Namespace:calico-system,Attempt:0,} returns sandbox id \"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b\""
Sep 9 23:45:06.566605 systemd-networkd[1904]: vxlan.calico: Gained IPv6LL
Sep 9 23:45:06.671801 containerd[2011]: time="2025-09-09T23:45:06.671445814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qkrg2,Uid:5bf39971-e401-40db-bcc6-2f920685cab2,Namespace:kube-system,Attempt:0,}"
Sep 9 23:45:06.952435 systemd-networkd[1904]: calic677a29748c: Link UP
Sep 9 23:45:06.955042 systemd-networkd[1904]: calic677a29748c: Gained carrier
Sep 9 23:45:06.989043 containerd[2011]: 2025-09-09 23:45:06.766 [INFO][5010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0 coredns-668d6bf9bc- kube-system 5bf39971-e401-40db-bcc6-2f920685cab2 848 0 2025-09-09 23:44:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-64 coredns-668d6bf9bc-qkrg2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic677a29748c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-"
Sep 9 23:45:06.989043 containerd[2011]: 2025-09-09 23:45:06.766 [INFO][5010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.989043 containerd[2011]: 2025-09-09 23:45:06.831 [INFO][5023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" HandleID="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.831 [INFO][5023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" HandleID="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb170), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-64", "pod":"coredns-668d6bf9bc-qkrg2", "timestamp":"2025-09-09 23:45:06.831128472 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.831 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.831 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.831 [INFO][5023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64'
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.869 [INFO][5023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" host="ip-172-31-18-64"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.882 [INFO][5023] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.890 [INFO][5023] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.893 [INFO][5023] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.898 [INFO][5023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:06.990030 containerd[2011]: 2025-09-09 23:45:06.898 [INFO][5023] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" host="ip-172-31-18-64"
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.902 [INFO][5023] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.920 [INFO][5023] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" host="ip-172-31-18-64"
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.940 [INFO][5023] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.3/26] block=192.168.127.0/26 handle="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" host="ip-172-31-18-64"
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.940 [INFO][5023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.3/26] handle="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" host="ip-172-31-18-64"
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.940 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:45:06.991664 containerd[2011]: 2025-09-09 23:45:06.940 [INFO][5023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.3/26] IPv6=[] ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" HandleID="k8s-pod-network.94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.946 [INFO][5010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bf39971-e401-40db-bcc6-2f920685cab2", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"coredns-668d6bf9bc-qkrg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic677a29748c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.946 [INFO][5010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.3/32] ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.946 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic677a29748c ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.954 [INFO][5010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.956 [INFO][5010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bf39971-e401-40db-bcc6-2f920685cab2", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a", Pod:"coredns-668d6bf9bc-qkrg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic677a29748c", MAC:"92:9c:d6:2e:cf:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 23:45:06.992174 containerd[2011]: 2025-09-09 23:45:06.982 [INFO][5010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qkrg2" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--qkrg2-eth0"
Sep 9 23:45:07.048269 containerd[2011]: time="2025-09-09T23:45:07.048199751Z" level=info msg="connecting to shim 94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a" address="unix:///run/containerd/s/810c81d098519e13396e7e51779bf095fbc539b61059f2d4ae2cbd12a4f84d35" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:07.123900 systemd[1]: Started cri-containerd-94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a.scope - libcontainer container 94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a.
Sep 9 23:45:07.266822 containerd[2011]: time="2025-09-09T23:45:07.266760891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qkrg2,Uid:5bf39971-e401-40db-bcc6-2f920685cab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a\""
Sep 9 23:45:07.280072 containerd[2011]: time="2025-09-09T23:45:07.279798470Z" level=info msg="CreateContainer within sandbox \"94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:45:07.324950 containerd[2011]: time="2025-09-09T23:45:07.324327920Z" level=info msg="Container bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:07.332282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259082260.mount: Deactivated successfully.
Sep 9 23:45:07.344411 containerd[2011]: time="2025-09-09T23:45:07.344256115Z" level=info msg="CreateContainer within sandbox \"94be95a2dd0000a9d272e04c03df129175a856b65121967b13594448181ada0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3\""
Sep 9 23:45:07.346167 containerd[2011]: time="2025-09-09T23:45:07.346079623Z" level=info msg="StartContainer for \"bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3\""
Sep 9 23:45:07.355411 containerd[2011]: time="2025-09-09T23:45:07.354364267Z" level=info msg="connecting to shim bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3" address="unix:///run/containerd/s/810c81d098519e13396e7e51779bf095fbc539b61059f2d4ae2cbd12a4f84d35" protocol=ttrpc version=3
Sep 9 23:45:07.416725 systemd[1]: Started cri-containerd-bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3.scope - libcontainer container bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3.
Sep 9 23:45:07.462710 systemd-networkd[1904]: cali828d05c88e8: Gained IPv6LL
Sep 9 23:45:07.535778 containerd[2011]: time="2025-09-09T23:45:07.535545542Z" level=info msg="StartContainer for \"bff9454c448ab05bbe00800212bba5cbf0f551fa7a345f0d1cb2ac619deab6f3\" returns successfully"
Sep 9 23:45:07.673305 containerd[2011]: time="2025-09-09T23:45:07.673227973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-47hsp,Uid:61f33671-c702-4cca-aaa1-36fa26aa921f,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 23:45:07.674421 containerd[2011]: time="2025-09-09T23:45:07.674307457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9586dc6d-6msnh,Uid:0fde2978-d9a0-45f5-a0c2-85155fe6d2c1,Namespace:calico-system,Attempt:0,}"
Sep 9 23:45:08.168319 kubelet[3314]: I0909 23:45:08.168218 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qkrg2" podStartSLOduration=45.168194981 podStartE2EDuration="45.168194981s" podCreationTimestamp="2025-09-09 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:08.167096444 +0000 UTC m=+51.741915646" watchObservedRunningTime="2025-09-09 23:45:08.168194981 +0000 UTC m=+51.743014159"
Sep 9 23:45:08.275693 systemd-networkd[1904]: cali8eb8a9c7c29: Link UP
Sep 9 23:45:08.276198 systemd-networkd[1904]: cali8eb8a9c7c29: Gained carrier
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:07.905 [INFO][5131] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0 calico-kube-controllers-5b9586dc6d- calico-system 0fde2978-d9a0-45f5-a0c2-85155fe6d2c1 847 0 2025-09-09 23:44:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b9586dc6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-64 calico-kube-controllers-5b9586dc6d-6msnh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8eb8a9c7c29 [] [] }} ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:07.905 [INFO][5131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.049 [INFO][5149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" HandleID="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Workload="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.050 [INFO][5149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" HandleID="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Workload="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-64", "pod":"calico-kube-controllers-5b9586dc6d-6msnh", "timestamp":"2025-09-09 23:45:08.049406175 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.050 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.051 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.052 [INFO][5149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64'
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.089 [INFO][5149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.124 [INFO][5149] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.150 [INFO][5149] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.160 [INFO][5149] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.187 [INFO][5149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.189 [INFO][5149] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.204 [INFO][5149] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.220 [INFO][5149] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.250 [INFO][5149] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.4/26] block=192.168.127.0/26 handle="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.251 [INFO][5149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.4/26] handle="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" host="ip-172-31-18-64"
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.251 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:45:08.352088 containerd[2011]: 2025-09-09 23:45:08.253 [INFO][5149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.4/26] IPv6=[] ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" HandleID="k8s-pod-network.e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Workload="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.267 [INFO][5131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0", GenerateName:"calico-kube-controllers-5b9586dc6d-", Namespace:"calico-system", SelfLink:"", UID:"0fde2978-d9a0-45f5-a0c2-85155fe6d2c1", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9586dc6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"calico-kube-controllers-5b9586dc6d-6msnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eb8a9c7c29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.267 [INFO][5131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.4/32] ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.267 [INFO][5131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eb8a9c7c29 ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.286 [INFO][5131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0"
Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.287 [INFO][5131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint",
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0", GenerateName:"calico-kube-controllers-5b9586dc6d-", Namespace:"calico-system", SelfLink:"", UID:"0fde2978-d9a0-45f5-a0c2-85155fe6d2c1", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9586dc6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe", Pod:"calico-kube-controllers-5b9586dc6d-6msnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eb8a9c7c29", MAC:"8a:a9:7d:b8:5c:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:08.355763 containerd[2011]: 2025-09-09 23:45:08.341 [INFO][5131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" Namespace="calico-system" Pod="calico-kube-controllers-5b9586dc6d-6msnh" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--kube--controllers--5b9586dc6d--6msnh-eth0" Sep 9 23:45:08.358737 systemd-networkd[1904]: calic677a29748c: Gained IPv6LL 
Sep 9 23:45:08.484716 systemd-networkd[1904]: cali135cf5328e2: Link UP Sep 9 23:45:08.489983 systemd-networkd[1904]: cali135cf5328e2: Gained carrier Sep 9 23:45:08.507632 containerd[2011]: time="2025-09-09T23:45:08.507263501Z" level=info msg="connecting to shim e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe" address="unix:///run/containerd/s/9c02669e744eb8e8cb98913f7835054924592657ee4fc2b8352a010608e19eed" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:07.897 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0 calico-apiserver-755779d75d- calico-apiserver 61f33671-c702-4cca-aaa1-36fa26aa921f 845 0 2025-09-09 23:44:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:755779d75d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-64 calico-apiserver-755779d75d-47hsp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali135cf5328e2 [] [] }} ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:07.898 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.175 [INFO][5147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" HandleID="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.177 [INFO][5147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" HandleID="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001214f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-64", "pod":"calico-apiserver-755779d75d-47hsp", "timestamp":"2025-09-09 23:45:08.175331263 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.177 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.251 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.252 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64' Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.333 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.352 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.368 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.379 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.389 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.390 [INFO][5147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.396 [INFO][5147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772 Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.412 [INFO][5147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.451 [INFO][5147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.5/26] block=192.168.127.0/26 
handle="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.454 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.5/26] handle="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" host="ip-172-31-18-64" Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.454 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:45:08.577497 containerd[2011]: 2025-09-09 23:45:08.454 [INFO][5147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.5/26] IPv6=[] ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" HandleID="k8s-pod-network.f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.477 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0", GenerateName:"calico-apiserver-755779d75d-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f33671-c702-4cca-aaa1-36fa26aa921f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755779d75d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"calico-apiserver-755779d75d-47hsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali135cf5328e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.477 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.5/32] ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.477 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali135cf5328e2 ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.494 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.503 [INFO][5124] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0", GenerateName:"calico-apiserver-755779d75d-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f33671-c702-4cca-aaa1-36fa26aa921f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755779d75d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772", Pod:"calico-apiserver-755779d75d-47hsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali135cf5328e2", MAC:"0a:97:1d:d0:58:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:08.591246 containerd[2011]: 2025-09-09 23:45:08.547 [INFO][5124] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-47hsp" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--47hsp-eth0" Sep 9 23:45:08.697196 containerd[2011]: time="2025-09-09T23:45:08.697145427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-f8ptg,Uid:d827e045-efbc-4ba3-baf6-36db46b00e7b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:45:08.739314 containerd[2011]: time="2025-09-09T23:45:08.738050494Z" level=info msg="connecting to shim f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772" address="unix:///run/containerd/s/e3e57750e739ff9720c31818c7a5f36ae5c75f2a86091f9daa7922e9e351bf20" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:08.752743 systemd[1]: Started cri-containerd-e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe.scope - libcontainer container e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe. Sep 9 23:45:08.884655 systemd[1]: Started cri-containerd-f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772.scope - libcontainer container f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772. 
Sep 9 23:45:09.001651 containerd[2011]: time="2025-09-09T23:45:09.001415540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9586dc6d-6msnh,Uid:0fde2978-d9a0-45f5-a0c2-85155fe6d2c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe\"" Sep 9 23:45:09.245209 containerd[2011]: time="2025-09-09T23:45:09.245087601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-47hsp,Uid:61f33671-c702-4cca-aaa1-36fa26aa921f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772\"" Sep 9 23:45:09.382841 systemd-networkd[1904]: cali691d5e74843: Link UP Sep 9 23:45:09.384949 systemd-networkd[1904]: cali691d5e74843: Gained carrier Sep 9 23:45:09.399255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529823327.mount: Deactivated successfully. Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:08.992 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0 calico-apiserver-755779d75d- calico-apiserver d827e045-efbc-4ba3-baf6-36db46b00e7b 852 0 2025-09-09 23:44:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:755779d75d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-64 calico-apiserver-755779d75d-f8ptg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali691d5e74843 [] [] }} ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-" Sep 9 
23:45:09.429267 containerd[2011]: 2025-09-09 23:45:08.993 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.145 [INFO][5282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" HandleID="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.148 [INFO][5282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" HandleID="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-64", "pod":"calico-apiserver-755779d75d-f8ptg", "timestamp":"2025-09-09 23:45:09.145790489 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.148 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.148 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.148 [INFO][5282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64' Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.201 [INFO][5282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.221 [INFO][5282] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.259 [INFO][5282] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.267 [INFO][5282] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.279 [INFO][5282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.279 [INFO][5282] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.287 [INFO][5282] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831 Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.319 [INFO][5282] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.347 [INFO][5282] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.6/26] block=192.168.127.0/26 
handle="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.347 [INFO][5282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.6/26] handle="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" host="ip-172-31-18-64" Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.347 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:45:09.429267 containerd[2011]: 2025-09-09 23:45:09.347 [INFO][5282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.6/26] IPv6=[] ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" HandleID="k8s-pod-network.a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Workload="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.361 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0", GenerateName:"calico-apiserver-755779d75d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d827e045-efbc-4ba3-baf6-36db46b00e7b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755779d75d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"calico-apiserver-755779d75d-f8ptg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691d5e74843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.361 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.6/32] ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.361 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali691d5e74843 ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.379 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.389 [INFO][5224] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0", GenerateName:"calico-apiserver-755779d75d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d827e045-efbc-4ba3-baf6-36db46b00e7b", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755779d75d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831", Pod:"calico-apiserver-755779d75d-f8ptg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali691d5e74843", MAC:"be:14:b2:2e:e6:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:09.431798 containerd[2011]: 2025-09-09 23:45:09.421 [INFO][5224] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" Namespace="calico-apiserver" Pod="calico-apiserver-755779d75d-f8ptg" WorkloadEndpoint="ip--172--31--18--64-k8s-calico--apiserver--755779d75d--f8ptg-eth0" Sep 9 23:45:09.463411 containerd[2011]: time="2025-09-09T23:45:09.463014932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:09.471770 containerd[2011]: time="2025-09-09T23:45:09.471718718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 23:45:09.481976 containerd[2011]: time="2025-09-09T23:45:09.481724026Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:09.499255 containerd[2011]: time="2025-09-09T23:45:09.497618325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:09.504290 containerd[2011]: time="2025-09-09T23:45:09.504212621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 4.251963865s" Sep 9 23:45:09.504290 containerd[2011]: time="2025-09-09T23:45:09.504278018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 
9 23:45:09.517148 containerd[2011]: time="2025-09-09T23:45:09.511329898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 23:45:09.525214 containerd[2011]: time="2025-09-09T23:45:09.525066875Z" level=info msg="CreateContainer within sandbox \"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 23:45:09.527781 containerd[2011]: time="2025-09-09T23:45:09.525559661Z" level=info msg="connecting to shim a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831" address="unix:///run/containerd/s/0d6b942620ea319137778b88159943f1cdaa7b98979e0a4a0afa004437bbf7db" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:09.569237 containerd[2011]: time="2025-09-09T23:45:09.568428501Z" level=info msg="Container 0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:09.643929 systemd[1]: Started cri-containerd-a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831.scope - libcontainer container a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831. 
Sep 9 23:45:09.647126 containerd[2011]: time="2025-09-09T23:45:09.647001865Z" level=info msg="CreateContainer within sandbox \"f0fc891c292a97b6846f8cb53b8ddef6270b6b0f1c0b37cc516af948313c9bb7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6\"" Sep 9 23:45:09.651113 containerd[2011]: time="2025-09-09T23:45:09.650926530Z" level=info msg="StartContainer for \"0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6\"" Sep 9 23:45:09.662963 containerd[2011]: time="2025-09-09T23:45:09.662494287Z" level=info msg="connecting to shim 0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6" address="unix:///run/containerd/s/84ea5ae924643e068a48bd2c4b4bfba2d40ff631a27d49426b2436a468a78dbd" protocol=ttrpc version=3 Sep 9 23:45:09.670821 containerd[2011]: time="2025-09-09T23:45:09.670758185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zhnjv,Uid:c8c73f8a-ee9c-4c93-8ca5-690098513149,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:09.714535 systemd[1]: Started cri-containerd-0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6.scope - libcontainer container 0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6. 
Sep 9 23:45:09.876401 containerd[2011]: time="2025-09-09T23:45:09.876316310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755779d75d-f8ptg,Uid:d827e045-efbc-4ba3-baf6-36db46b00e7b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831\"" Sep 9 23:45:10.024674 systemd-networkd[1904]: cali8eb8a9c7c29: Gained IPv6LL Sep 9 23:45:10.047987 systemd-networkd[1904]: cali2aafd5b5e5e: Link UP Sep 9 23:45:10.050350 containerd[2011]: time="2025-09-09T23:45:10.049549121Z" level=info msg="StartContainer for \"0245afebec6b18ee77a64df4e71331a065c0cbbffcb7c5cabf030df63c70c5a6\" returns successfully" Sep 9 23:45:10.051820 systemd-networkd[1904]: cali2aafd5b5e5e: Gained carrier Sep 9 23:45:10.084649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814666201.mount: Deactivated successfully. Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.809 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0 coredns-668d6bf9bc- kube-system c8c73f8a-ee9c-4c93-8ca5-690098513149 851 0 2025-09-09 23:44:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-64 coredns-668d6bf9bc-zhnjv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2aafd5b5e5e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.810 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.910 [INFO][5381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" HandleID="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.910 [INFO][5381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" HandleID="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000337e20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-64", "pod":"coredns-668d6bf9bc-zhnjv", "timestamp":"2025-09-09 23:45:09.910496612 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.910 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.910 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.910 [INFO][5381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64' Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.928 [INFO][5381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.941 [INFO][5381] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.953 [INFO][5381] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.958 [INFO][5381] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.975 [INFO][5381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.976 [INFO][5381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.986 [INFO][5381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:09.996 [INFO][5381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:10.028 [INFO][5381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.7/26] block=192.168.127.0/26 
handle="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:10.030 [INFO][5381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.7/26] handle="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" host="ip-172-31-18-64" Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:10.030 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:45:10.114657 containerd[2011]: 2025-09-09 23:45:10.030 [INFO][5381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.7/26] IPv6=[] ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" HandleID="k8s-pod-network.cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Workload="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.041 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c8c73f8a-ee9c-4c93-8ca5-690098513149", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"coredns-668d6bf9bc-zhnjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aafd5b5e5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.042 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.7/32] ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.042 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2aafd5b5e5e ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.058 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.059 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c8c73f8a-ee9c-4c93-8ca5-690098513149", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c", Pod:"coredns-668d6bf9bc-zhnjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aafd5b5e5e", MAC:"ca:8d:16:ad:cc:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:10.115954 containerd[2011]: 2025-09-09 23:45:10.106 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" Namespace="kube-system" Pod="coredns-668d6bf9bc-zhnjv" WorkloadEndpoint="ip--172--31--18--64-k8s-coredns--668d6bf9bc--zhnjv-eth0" Sep 9 23:45:10.183955 containerd[2011]: time="2025-09-09T23:45:10.183835151Z" level=info msg="connecting to shim cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c" address="unix:///run/containerd/s/a1b958c0a72c78cb1a103e3a23741475e059cc51a89fe8962390322c0b91f36b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:10.258667 systemd[1]: Started cri-containerd-cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c.scope - libcontainer container cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c. 
Sep 9 23:45:10.395056 containerd[2011]: time="2025-09-09T23:45:10.394934129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zhnjv,Uid:c8c73f8a-ee9c-4c93-8ca5-690098513149,Namespace:kube-system,Attempt:0,} returns sandbox id \"cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c\"" Sep 9 23:45:10.402782 containerd[2011]: time="2025-09-09T23:45:10.402713441Z" level=info msg="CreateContainer within sandbox \"cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:45:10.434127 containerd[2011]: time="2025-09-09T23:45:10.434047853Z" level=info msg="Container 52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:10.445522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972230087.mount: Deactivated successfully. Sep 9 23:45:10.457953 containerd[2011]: time="2025-09-09T23:45:10.457233288Z" level=info msg="CreateContainer within sandbox \"cadef76196035a16a22534d4f195697d98d8714893ac1e64b03c6ef33d7d312c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314\"" Sep 9 23:45:10.460719 containerd[2011]: time="2025-09-09T23:45:10.460638477Z" level=info msg="StartContainer for \"52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314\"" Sep 9 23:45:10.467241 containerd[2011]: time="2025-09-09T23:45:10.467160017Z" level=info msg="connecting to shim 52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314" address="unix:///run/containerd/s/a1b958c0a72c78cb1a103e3a23741475e059cc51a89fe8962390322c0b91f36b" protocol=ttrpc version=3 Sep 9 23:45:10.525684 systemd[1]: Started cri-containerd-52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314.scope - libcontainer container 52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314. 
Sep 9 23:45:10.536057 systemd-networkd[1904]: cali135cf5328e2: Gained IPv6LL Sep 9 23:45:10.605263 containerd[2011]: time="2025-09-09T23:45:10.605205030Z" level=info msg="StartContainer for \"52b3936d7ff6775c94d2f7a13f5705f8dbb4ffa928ff82e55307cc0a62a10314\" returns successfully" Sep 9 23:45:10.680716 containerd[2011]: time="2025-09-09T23:45:10.680474582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p985n,Uid:89cd5784-8525-40b2-b9f2-e26328eb1dea,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:10.792096 systemd-networkd[1904]: cali691d5e74843: Gained IPv6LL Sep 9 23:45:10.904302 systemd-networkd[1904]: cali2de60977dc8: Link UP Sep 9 23:45:10.906229 systemd-networkd[1904]: cali2de60977dc8: Gained carrier Sep 9 23:45:10.933747 kubelet[3314]: I0909 23:45:10.933065 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76d47f499b-z5qps" podStartSLOduration=2.490752664 podStartE2EDuration="8.933041504s" podCreationTimestamp="2025-09-09 23:45:02 +0000 UTC" firstStartedPulling="2025-09-09 23:45:03.067612416 +0000 UTC m=+46.642431594" lastFinishedPulling="2025-09-09 23:45:09.509901256 +0000 UTC m=+53.084720434" observedRunningTime="2025-09-09 23:45:10.182499856 +0000 UTC m=+53.757319118" watchObservedRunningTime="2025-09-09 23:45:10.933041504 +0000 UTC m=+54.507860682" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.760 [INFO][5506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0 goldmane-54d579b49d- calico-system 89cd5784-8525-40b2-b9f2-e26328eb1dea 849 0 2025-09-09 23:44:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-64 goldmane-54d579b49d-p985n eth0 goldmane [] [] 
[kns.calico-system ksa.calico-system.goldmane] cali2de60977dc8 [] [] }} ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.760 [INFO][5506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.822 [INFO][5518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" HandleID="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Workload="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.822 [INFO][5518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" HandleID="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Workload="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-64", "pod":"goldmane-54d579b49d-p985n", "timestamp":"2025-09-09 23:45:10.82217481 +0000 UTC"}, Hostname:"ip-172-31-18-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.822 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.822 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.822 [INFO][5518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-64' Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.842 [INFO][5518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.850 [INFO][5518] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.859 [INFO][5518] ipam/ipam.go 511: Trying affinity for 192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.862 [INFO][5518] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.867 [INFO][5518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.0/26 host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.868 [INFO][5518] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.127.0/26 handle="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.870 [INFO][5518] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.879 [INFO][5518] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.127.0/26 handle="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 
23:45:10.893 [INFO][5518] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.127.8/26] block=192.168.127.0/26 handle="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.893 [INFO][5518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.8/26] handle="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" host="ip-172-31-18-64" Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.893 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:45:10.937605 containerd[2011]: 2025-09-09 23:45:10.893 [INFO][5518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.127.8/26] IPv6=[] ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" HandleID="k8s-pod-network.86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Workload="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.897 [INFO][5506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"89cd5784-8525-40b2-b9f2-e26328eb1dea", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"", Pod:"goldmane-54d579b49d-p985n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2de60977dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.897 [INFO][5506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.8/32] ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.897 [INFO][5506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2de60977dc8 ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.908 [INFO][5506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.910 [INFO][5506] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"89cd5784-8525-40b2-b9f2-e26328eb1dea", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-64", ContainerID:"86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b", Pod:"goldmane-54d579b49d-p985n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2de60977dc8", MAC:"c6:4f:df:dd:16:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:10.939709 containerd[2011]: 2025-09-09 23:45:10.930 [INFO][5506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" Namespace="calico-system" 
Pod="goldmane-54d579b49d-p985n" WorkloadEndpoint="ip--172--31--18--64-k8s-goldmane--54d579b49d--p985n-eth0" Sep 9 23:45:10.987457 containerd[2011]: time="2025-09-09T23:45:10.986673375Z" level=info msg="connecting to shim 86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b" address="unix:///run/containerd/s/7867237a55d41ebe23a23819456d76c9a160566af1053ba4d2dc0f3a868fc10f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:11.046242 systemd[1]: Started cri-containerd-86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b.scope - libcontainer container 86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b. Sep 9 23:45:11.135263 containerd[2011]: time="2025-09-09T23:45:11.135179288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p985n,Uid:89cd5784-8525-40b2-b9f2-e26328eb1dea,Namespace:calico-system,Attempt:0,} returns sandbox id \"86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b\"" Sep 9 23:45:11.171414 kubelet[3314]: I0909 23:45:11.169934 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zhnjv" podStartSLOduration=48.169908828 podStartE2EDuration="48.169908828s" podCreationTimestamp="2025-09-09 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:11.167243184 +0000 UTC m=+54.742062386" watchObservedRunningTime="2025-09-09 23:45:11.169908828 +0000 UTC m=+54.744728006" Sep 9 23:45:11.495700 systemd-networkd[1904]: cali2aafd5b5e5e: Gained IPv6LL Sep 9 23:45:12.080806 containerd[2011]: time="2025-09-09T23:45:12.079593442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:12.081558 containerd[2011]: time="2025-09-09T23:45:12.081514294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, 
bytes read=8227489" Sep 9 23:45:12.084093 containerd[2011]: time="2025-09-09T23:45:12.084000676Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:12.089774 containerd[2011]: time="2025-09-09T23:45:12.089661373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:12.091526 containerd[2011]: time="2025-09-09T23:45:12.091454517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 2.580031597s" Sep 9 23:45:12.091526 containerd[2011]: time="2025-09-09T23:45:12.091519362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 23:45:12.094331 containerd[2011]: time="2025-09-09T23:45:12.093962403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 23:45:12.097038 containerd[2011]: time="2025-09-09T23:45:12.096037376Z" level=info msg="CreateContainer within sandbox \"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 23:45:12.117866 containerd[2011]: time="2025-09-09T23:45:12.117810977Z" level=info msg="Container beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:12.132116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574749384.mount: Deactivated successfully. 
Sep 9 23:45:12.145742 containerd[2011]: time="2025-09-09T23:45:12.145662124Z" level=info msg="CreateContainer within sandbox \"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2\"" Sep 9 23:45:12.148467 containerd[2011]: time="2025-09-09T23:45:12.146757780Z" level=info msg="StartContainer for \"beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2\"" Sep 9 23:45:12.149760 containerd[2011]: time="2025-09-09T23:45:12.149695276Z" level=info msg="connecting to shim beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2" address="unix:///run/containerd/s/7fdac8837c89fcd63aaee2ff71f809f46e7d7ded4aaf1e3d1feef0c8daeebc15" protocol=ttrpc version=3 Sep 9 23:45:12.201728 systemd[1]: Started cri-containerd-beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2.scope - libcontainer container beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2. 
Sep 9 23:45:12.299426 containerd[2011]: time="2025-09-09T23:45:12.297479976Z" level=info msg="StartContainer for \"beed0a05e6605a6bf61284b39638fd1a95ee0ad86e150c0f5fe22e550fc773b2\" returns successfully"
Sep 9 23:45:12.711298 systemd-networkd[1904]: cali2de60977dc8: Gained IPv6LL
Sep 9 23:45:14.719681 ntpd[1973]: Listen normally on 8 vxlan.calico 192.168.127.0:123
Sep 9 23:45:14.719814 ntpd[1973]: Listen normally on 9 cali12d61d3f8ea [fe80::ecee:eeff:feee:eeee%4]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 8 vxlan.calico 192.168.127.0:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 9 cali12d61d3f8ea [fe80::ecee:eeff:feee:eeee%4]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 10 vxlan.calico [fe80::6412:91ff:fec3:af9c%5]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 11 cali828d05c88e8 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 12 calic677a29748c [fe80::ecee:eeff:feee:eeee%9]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 13 cali8eb8a9c7c29 [fe80::ecee:eeff:feee:eeee%10]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 14 cali135cf5328e2 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 15 cali691d5e74843 [fe80::ecee:eeff:feee:eeee%12]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 16 cali2aafd5b5e5e [fe80::ecee:eeff:feee:eeee%13]:123
Sep 9 23:45:14.720351 ntpd[1973]: 9 Sep 23:45:14 ntpd[1973]: Listen normally on 17 cali2de60977dc8 [fe80::ecee:eeff:feee:eeee%14]:123
Sep 9 23:45:14.719905 ntpd[1973]: Listen normally on 10 vxlan.calico [fe80::6412:91ff:fec3:af9c%5]:123
Sep 9 23:45:14.719970 ntpd[1973]: Listen normally on 11 cali828d05c88e8 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 9 23:45:14.720032 ntpd[1973]: Listen normally on 12 calic677a29748c [fe80::ecee:eeff:feee:eeee%9]:123
Sep 9 23:45:14.720095 ntpd[1973]: Listen normally on 13 cali8eb8a9c7c29 [fe80::ecee:eeff:feee:eeee%10]:123
Sep 9 23:45:14.720156 ntpd[1973]: Listen normally on 14 cali135cf5328e2 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 9 23:45:14.720218 ntpd[1973]: Listen normally on 15 cali691d5e74843 [fe80::ecee:eeff:feee:eeee%12]:123
Sep 9 23:45:14.720279 ntpd[1973]: Listen normally on 16 cali2aafd5b5e5e [fe80::ecee:eeff:feee:eeee%13]:123
Sep 9 23:45:14.720341 ntpd[1973]: Listen normally on 17 cali2de60977dc8 [fe80::ecee:eeff:feee:eeee%14]:123
Sep 9 23:45:17.383659 containerd[2011]: time="2025-09-09T23:45:17.382136797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:17.387037 containerd[2011]: time="2025-09-09T23:45:17.386811718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 9 23:45:17.390432 containerd[2011]: time="2025-09-09T23:45:17.390321143Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:17.399517 containerd[2011]: time="2025-09-09T23:45:17.399395158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:17.403481 containerd[2011]: time="2025-09-09T23:45:17.403407695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 5.309377783s"
Sep 9 23:45:17.403481 containerd[2011]: time="2025-09-09T23:45:17.403479299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 9 23:45:17.406745 containerd[2011]: time="2025-09-09T23:45:17.406687675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 23:45:17.445466 containerd[2011]: time="2025-09-09T23:45:17.445419691Z" level=info msg="CreateContainer within sandbox \"e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 23:45:17.464313 containerd[2011]: time="2025-09-09T23:45:17.462806949Z" level=info msg="Container ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:17.484157 containerd[2011]: time="2025-09-09T23:45:17.484080860Z" level=info msg="CreateContainer within sandbox \"e9ae4ca1e7812d3f3c1318d67f1030c7ac2a545267df6a4a5ad9d75276941fbe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\""
Sep 9 23:45:17.487310 containerd[2011]: time="2025-09-09T23:45:17.487137840Z" level=info msg="StartContainer for \"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\""
Sep 9 23:45:17.490494 containerd[2011]: time="2025-09-09T23:45:17.490406606Z" level=info msg="connecting to shim ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e" address="unix:///run/containerd/s/9c02669e744eb8e8cb98913f7835054924592657ee4fc2b8352a010608e19eed" protocol=ttrpc version=3
Sep 9 23:45:17.573744 systemd[1]: Started cri-containerd-ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e.scope - libcontainer container ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e.
Sep 9 23:45:17.674936 containerd[2011]: time="2025-09-09T23:45:17.674608962Z" level=info msg="StartContainer for \"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\" returns successfully"
Sep 9 23:45:18.234768 kubelet[3314]: I0909 23:45:18.233780 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b9586dc6d-6msnh" podStartSLOduration=25.841361153 podStartE2EDuration="34.233758893s" podCreationTimestamp="2025-09-09 23:44:44 +0000 UTC" firstStartedPulling="2025-09-09 23:45:09.01258764 +0000 UTC m=+52.587406818" lastFinishedPulling="2025-09-09 23:45:17.404985392 +0000 UTC m=+60.979804558" observedRunningTime="2025-09-09 23:45:18.228632103 +0000 UTC m=+61.803451353" watchObservedRunningTime="2025-09-09 23:45:18.233758893 +0000 UTC m=+61.808578071"
Sep 9 23:45:18.286022 containerd[2011]: time="2025-09-09T23:45:18.285972628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\" id:\"980b6a625e3a3cd278fda2af18b5c7cf3dae47fb3dbd5b61853bbfc05c926aa0\" pid:5690 exited_at:{seconds:1757461518 nanos:282246866}"
Sep 9 23:45:21.005012 systemd[1]: Started sshd@7-172.31.18.64:22-139.178.89.65:44778.service - OpenSSH per-connection server daemon (139.178.89.65:44778).
Sep 9 23:45:21.210450 sshd[5706]: Accepted publickey for core from 139.178.89.65 port 44778 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:21.213547 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:21.225507 systemd-logind[1981]: New session 8 of user core.
Sep 9 23:45:21.234698 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 23:45:21.537582 sshd[5709]: Connection closed by 139.178.89.65 port 44778
Sep 9 23:45:21.538025 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:21.546186 systemd-logind[1981]: Session 8 logged out. Waiting for processes to exit.
Sep 9 23:45:21.547520 systemd[1]: sshd@7-172.31.18.64:22-139.178.89.65:44778.service: Deactivated successfully.
Sep 9 23:45:21.553682 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 23:45:21.557882 systemd-logind[1981]: Removed session 8.
Sep 9 23:45:23.575215 containerd[2011]: time="2025-09-09T23:45:23.575044074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:23.577292 containerd[2011]: time="2025-09-09T23:45:23.577219346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 9 23:45:23.579913 containerd[2011]: time="2025-09-09T23:45:23.579575247Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:23.586322 containerd[2011]: time="2025-09-09T23:45:23.586161739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:23.589255 containerd[2011]: time="2025-09-09T23:45:23.589169074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 6.182396973s"
Sep 9 23:45:23.589994 containerd[2011]: time="2025-09-09T23:45:23.589225093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 23:45:23.592745 containerd[2011]: time="2025-09-09T23:45:23.592682593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 23:45:23.598046 containerd[2011]: time="2025-09-09T23:45:23.597872979Z" level=info msg="CreateContainer within sandbox \"f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 23:45:23.624969 containerd[2011]: time="2025-09-09T23:45:23.624898053Z" level=info msg="Container ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:23.640548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083906812.mount: Deactivated successfully.
Sep 9 23:45:23.656026 containerd[2011]: time="2025-09-09T23:45:23.655903068Z" level=info msg="CreateContainer within sandbox \"f69a9417f4366d47c315fc8a52761e915898cc151a80a9ffce41136aa69fc772\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9\""
Sep 9 23:45:23.658350 containerd[2011]: time="2025-09-09T23:45:23.658280232Z" level=info msg="StartContainer for \"ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9\""
Sep 9 23:45:23.664903 containerd[2011]: time="2025-09-09T23:45:23.664116517Z" level=info msg="connecting to shim ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9" address="unix:///run/containerd/s/e3e57750e739ff9720c31818c7a5f36ae5c75f2a86091f9daa7922e9e351bf20" protocol=ttrpc version=3
Sep 9 23:45:23.721699 systemd[1]: Started cri-containerd-ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9.scope - libcontainer container ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9.
Sep 9 23:45:23.817367 containerd[2011]: time="2025-09-09T23:45:23.817175585Z" level=info msg="StartContainer for \"ffdd8eb7790bc5de1036937aa8c33130cd53182691fa0c2f4facc522745745e9\" returns successfully"
Sep 9 23:45:23.955788 containerd[2011]: time="2025-09-09T23:45:23.954307338Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:23.958221 containerd[2011]: time="2025-09-09T23:45:23.958154517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 23:45:23.961950 containerd[2011]: time="2025-09-09T23:45:23.961863170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 369.112875ms"
Sep 9 23:45:23.962084 containerd[2011]: time="2025-09-09T23:45:23.961947440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 23:45:23.965817 containerd[2011]: time="2025-09-09T23:45:23.965763115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 23:45:23.971057 containerd[2011]: time="2025-09-09T23:45:23.970519700Z" level=info msg="CreateContainer within sandbox \"a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 23:45:23.990799 containerd[2011]: time="2025-09-09T23:45:23.990734323Z" level=info msg="Container 2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:24.016072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158970499.mount: Deactivated successfully.
Sep 9 23:45:24.022085 containerd[2011]: time="2025-09-09T23:45:24.022019762Z" level=info msg="CreateContainer within sandbox \"a5262a6f124776612e0be68bc6faf8756b23a0190c9672db41195376445fb831\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985\""
Sep 9 23:45:24.023776 containerd[2011]: time="2025-09-09T23:45:24.023706773Z" level=info msg="StartContainer for \"2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985\""
Sep 9 23:45:24.029818 containerd[2011]: time="2025-09-09T23:45:24.029698739Z" level=info msg="connecting to shim 2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985" address="unix:///run/containerd/s/0d6b942620ea319137778b88159943f1cdaa7b98979e0a4a0afa004437bbf7db" protocol=ttrpc version=3
Sep 9 23:45:24.070693 systemd[1]: Started cri-containerd-2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985.scope - libcontainer container 2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985.
Sep 9 23:45:24.174017 containerd[2011]: time="2025-09-09T23:45:24.173953796Z" level=info msg="StartContainer for \"2e248f1f0ec538ef258d729ea35a9d8287666151ac5249976d1f164720f4d985\" returns successfully"
Sep 9 23:45:24.281294 kubelet[3314]: I0909 23:45:24.281065 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-755779d75d-f8ptg" podStartSLOduration=34.199376197 podStartE2EDuration="48.280974656s" podCreationTimestamp="2025-09-09 23:44:36 +0000 UTC" firstStartedPulling="2025-09-09 23:45:09.882098988 +0000 UTC m=+53.456918166" lastFinishedPulling="2025-09-09 23:45:23.963697459 +0000 UTC m=+67.538516625" observedRunningTime="2025-09-09 23:45:24.279610486 +0000 UTC m=+67.854429677" watchObservedRunningTime="2025-09-09 23:45:24.280974656 +0000 UTC m=+67.855793846"
Sep 9 23:45:24.282580 kubelet[3314]: I0909 23:45:24.282462 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-755779d75d-47hsp" podStartSLOduration=33.944495024 podStartE2EDuration="48.282440673s" podCreationTimestamp="2025-09-09 23:44:36 +0000 UTC" firstStartedPulling="2025-09-09 23:45:09.254018301 +0000 UTC m=+52.828837479" lastFinishedPulling="2025-09-09 23:45:23.59196395 +0000 UTC m=+67.166783128" observedRunningTime="2025-09-09 23:45:24.256626704 +0000 UTC m=+67.831445906" watchObservedRunningTime="2025-09-09 23:45:24.282440673 +0000 UTC m=+67.857259851"
Sep 9 23:45:26.250160 kubelet[3314]: I0909 23:45:26.249627 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:45:26.585411 systemd[1]: Started sshd@8-172.31.18.64:22-139.178.89.65:44784.service - OpenSSH per-connection server daemon (139.178.89.65:44784).
Sep 9 23:45:26.810575 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 44784 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:26.813309 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:26.822732 systemd-logind[1981]: New session 9 of user core.
Sep 9 23:45:26.830858 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 23:45:27.166734 sshd[5815]: Connection closed by 139.178.89.65 port 44784
Sep 9 23:45:27.167719 sshd-session[5812]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:27.180569 systemd[1]: sshd@8-172.31.18.64:22-139.178.89.65:44784.service: Deactivated successfully.
Sep 9 23:45:27.187210 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 23:45:27.192072 systemd-logind[1981]: Session 9 logged out. Waiting for processes to exit.
Sep 9 23:45:27.195783 systemd-logind[1981]: Removed session 9.
Sep 9 23:45:29.054266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415054113.mount: Deactivated successfully.
Sep 9 23:45:30.199871 containerd[2011]: time="2025-09-09T23:45:30.199813850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:30.202702 containerd[2011]: time="2025-09-09T23:45:30.202629389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 9 23:45:30.206787 containerd[2011]: time="2025-09-09T23:45:30.205336682Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:30.211399 containerd[2011]: time="2025-09-09T23:45:30.211319284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:30.213408 containerd[2011]: time="2025-09-09T23:45:30.213142623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 6.247322288s"
Sep 9 23:45:30.213408 containerd[2011]: time="2025-09-09T23:45:30.213200732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 9 23:45:30.221061 containerd[2011]: time="2025-09-09T23:45:30.219351514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 23:45:30.223544 containerd[2011]: time="2025-09-09T23:45:30.223403095Z" level=info msg="CreateContainer within sandbox \"86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 23:45:30.244401 containerd[2011]: time="2025-09-09T23:45:30.241183117Z" level=info msg="Container 7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:30.274227 containerd[2011]: time="2025-09-09T23:45:30.274165280Z" level=info msg="CreateContainer within sandbox \"86c1c255f9911397746ab20e28ad9276f999254f431d2c95787c82766498e07b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\""
Sep 9 23:45:30.275948 containerd[2011]: time="2025-09-09T23:45:30.275901311Z" level=info msg="StartContainer for \"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\""
Sep 9 23:45:30.283403 containerd[2011]: time="2025-09-09T23:45:30.280847747Z" level=info msg="connecting to shim 7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17" address="unix:///run/containerd/s/7867237a55d41ebe23a23819456d76c9a160566af1053ba4d2dc0f3a868fc10f" protocol=ttrpc version=3
Sep 9 23:45:30.363718 systemd[1]: Started cri-containerd-7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17.scope - libcontainer container 7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17.
Sep 9 23:45:30.531579 containerd[2011]: time="2025-09-09T23:45:30.531327282Z" level=info msg="StartContainer for \"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\" returns successfully"
Sep 9 23:45:31.330758 kubelet[3314]: I0909 23:45:31.330196 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-p985n" podStartSLOduration=29.250115033 podStartE2EDuration="48.330114142s" podCreationTimestamp="2025-09-09 23:44:43 +0000 UTC" firstStartedPulling="2025-09-09 23:45:11.137970611 +0000 UTC m=+54.712789789" lastFinishedPulling="2025-09-09 23:45:30.217969732 +0000 UTC m=+73.792788898" observedRunningTime="2025-09-09 23:45:31.323833239 +0000 UTC m=+74.898652429" watchObservedRunningTime="2025-09-09 23:45:31.330114142 +0000 UTC m=+74.904933344"
Sep 9 23:45:31.527209 containerd[2011]: time="2025-09-09T23:45:31.527122422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\" id:\"f4e3f1ba854655b4af859c73bc057aab5d630279f9652859f246ba1d330dfeef\" pid:5895 exited_at:{seconds:1757461531 nanos:526141340}"
Sep 9 23:45:32.040523 containerd[2011]: time="2025-09-09T23:45:32.040439244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:32.042760 containerd[2011]: time="2025-09-09T23:45:32.042698293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 9 23:45:32.044910 containerd[2011]: time="2025-09-09T23:45:32.044822394Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:32.050820 containerd[2011]: time="2025-09-09T23:45:32.050750465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:32.053982 containerd[2011]: time="2025-09-09T23:45:32.053802246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.832729552s"
Sep 9 23:45:32.053982 containerd[2011]: time="2025-09-09T23:45:32.053857858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 9 23:45:32.058887 containerd[2011]: time="2025-09-09T23:45:32.058829818Z" level=info msg="CreateContainer within sandbox \"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 23:45:32.077958 containerd[2011]: time="2025-09-09T23:45:32.075537692Z" level=info msg="Container 5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:32.108051 containerd[2011]: time="2025-09-09T23:45:32.107976968Z" level=info msg="CreateContainer within sandbox \"c124e39f5f4c0232180109b6a22a6e933f02e2a2212a0822025fe991209f029b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1\""
Sep 9 23:45:32.108977 containerd[2011]: time="2025-09-09T23:45:32.108894082Z" level=info msg="StartContainer for \"5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1\""
Sep 9 23:45:32.113523 containerd[2011]: time="2025-09-09T23:45:32.113366426Z" level=info msg="connecting to shim 5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1" address="unix:///run/containerd/s/7fdac8837c89fcd63aaee2ff71f809f46e7d7ded4aaf1e3d1feef0c8daeebc15" protocol=ttrpc version=3
Sep 9 23:45:32.167683 systemd[1]: Started cri-containerd-5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1.scope - libcontainer container 5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1.
Sep 9 23:45:32.208769 systemd[1]: Started sshd@9-172.31.18.64:22-139.178.89.65:59774.service - OpenSSH per-connection server daemon (139.178.89.65:59774).
Sep 9 23:45:32.287497 containerd[2011]: time="2025-09-09T23:45:32.286670407Z" level=info msg="StartContainer for \"5ae775abc5f364be8a7871fac12c948eb906bbd9c2bf0f22eb1ab91a0236cdd1\" returns successfully"
Sep 9 23:45:32.435623 sshd[5932]: Accepted publickey for core from 139.178.89.65 port 59774 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:32.437918 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:32.448203 systemd-logind[1981]: New session 10 of user core.
Sep 9 23:45:32.455628 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 23:45:32.829456 sshd[5947]: Connection closed by 139.178.89.65 port 59774
Sep 9 23:45:32.830365 sshd-session[5932]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:32.842160 systemd[1]: sshd@9-172.31.18.64:22-139.178.89.65:59774.service: Deactivated successfully.
Sep 9 23:45:32.849963 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 23:45:32.856512 systemd-logind[1981]: Session 10 logged out. Waiting for processes to exit.
Sep 9 23:45:32.873121 kubelet[3314]: I0909 23:45:32.873075 3314 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 23:45:32.875917 kubelet[3314]: I0909 23:45:32.873133 3314 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 23:45:32.882846 systemd[1]: Started sshd@10-172.31.18.64:22-139.178.89.65:59786.service - OpenSSH per-connection server daemon (139.178.89.65:59786).
Sep 9 23:45:32.884348 systemd-logind[1981]: Removed session 10.
Sep 9 23:45:33.128955 sshd[5960]: Accepted publickey for core from 139.178.89.65 port 59786 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:33.132205 sshd-session[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:33.149172 systemd-logind[1981]: New session 11 of user core.
Sep 9 23:45:33.157619 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 23:45:33.605888 sshd[5981]: Connection closed by 139.178.89.65 port 59786
Sep 9 23:45:33.610271 sshd-session[5960]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:33.620118 systemd-logind[1981]: Session 11 logged out. Waiting for processes to exit.
Sep 9 23:45:33.622248 systemd[1]: sshd@10-172.31.18.64:22-139.178.89.65:59786.service: Deactivated successfully.
Sep 9 23:45:33.632147 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 23:45:33.663157 systemd[1]: Started sshd@11-172.31.18.64:22-139.178.89.65:59790.service - OpenSSH per-connection server daemon (139.178.89.65:59790).
Sep 9 23:45:33.663789 systemd-logind[1981]: Removed session 11.
Sep 9 23:45:33.877975 containerd[2011]: time="2025-09-09T23:45:33.877820604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" id:\"197fbca5e44f848de2aa51bb49106e6c8f8acd8e087804f92ed38e2b20e3d594\" pid:5976 exited_at:{seconds:1757461533 nanos:877198741}"
Sep 9 23:45:33.887558 sshd[5997]: Accepted publickey for core from 139.178.89.65 port 59790 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:33.890578 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:33.908471 systemd-logind[1981]: New session 12 of user core.
Sep 9 23:45:33.916726 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 23:45:33.950177 kubelet[3314]: I0909 23:45:33.949830 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-44bk5" podStartSLOduration=24.029797927 podStartE2EDuration="49.949806527s" podCreationTimestamp="2025-09-09 23:44:44 +0000 UTC" firstStartedPulling="2025-09-09 23:45:06.135422968 +0000 UTC m=+49.710242146" lastFinishedPulling="2025-09-09 23:45:32.055431568 +0000 UTC m=+75.630250746" observedRunningTime="2025-09-09 23:45:32.354868354 +0000 UTC m=+75.929687556" watchObservedRunningTime="2025-09-09 23:45:33.949806527 +0000 UTC m=+77.524625705"
Sep 9 23:45:34.253399 sshd[6002]: Connection closed by 139.178.89.65 port 59790
Sep 9 23:45:34.254300 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:34.264868 systemd[1]: sshd@11-172.31.18.64:22-139.178.89.65:59790.service: Deactivated successfully.
Sep 9 23:45:34.274888 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 23:45:34.277765 systemd-logind[1981]: Session 12 logged out. Waiting for processes to exit.
Sep 9 23:45:34.283320 systemd-logind[1981]: Removed session 12.
Sep 9 23:45:39.300210 systemd[1]: Started sshd@12-172.31.18.64:22-139.178.89.65:59800.service - OpenSSH per-connection server daemon (139.178.89.65:59800).
Sep 9 23:45:39.503130 sshd[6022]: Accepted publickey for core from 139.178.89.65 port 59800 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:39.505883 sshd-session[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:39.515438 systemd-logind[1981]: New session 13 of user core.
Sep 9 23:45:39.523671 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 23:45:39.808212 sshd[6025]: Connection closed by 139.178.89.65 port 59800
Sep 9 23:45:39.809286 sshd-session[6022]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:39.816893 systemd[1]: sshd@12-172.31.18.64:22-139.178.89.65:59800.service: Deactivated successfully.
Sep 9 23:45:39.821998 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 23:45:39.824439 systemd-logind[1981]: Session 13 logged out. Waiting for processes to exit.
Sep 9 23:45:39.829491 systemd-logind[1981]: Removed session 13.
Sep 9 23:45:44.852784 systemd[1]: Started sshd@13-172.31.18.64:22-139.178.89.65:38778.service - OpenSSH per-connection server daemon (139.178.89.65:38778).
Sep 9 23:45:45.080612 sshd[6038]: Accepted publickey for core from 139.178.89.65 port 38778 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:45.084311 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:45.099309 systemd-logind[1981]: New session 14 of user core.
Sep 9 23:45:45.107242 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:45:45.433542 sshd[6041]: Connection closed by 139.178.89.65 port 38778
Sep 9 23:45:45.434154 sshd-session[6038]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:45.445179 systemd[1]: sshd@13-172.31.18.64:22-139.178.89.65:38778.service: Deactivated successfully.
Sep 9 23:45:45.451958 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:45:45.455846 systemd-logind[1981]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:45:45.461483 systemd-logind[1981]: Removed session 14.
Sep 9 23:45:48.273465 containerd[2011]: time="2025-09-09T23:45:48.273285908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\" id:\"4dc5b864f6b855f39b5a8b5ef10ce451ad82d676f528b45020a5b26aab06c8c2\" pid:6070 exited_at:{seconds:1757461548 nanos:272532315}"
Sep 9 23:45:50.479175 systemd[1]: Started sshd@14-172.31.18.64:22-139.178.89.65:34108.service - OpenSSH per-connection server daemon (139.178.89.65:34108).
Sep 9 23:45:50.688286 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 34108 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:50.691277 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:50.699609 systemd-logind[1981]: New session 15 of user core.
Sep 9 23:45:50.705689 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:45:50.967040 sshd[6083]: Connection closed by 139.178.89.65 port 34108
Sep 9 23:45:50.967533 sshd-session[6080]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:50.975591 systemd[1]: sshd@14-172.31.18.64:22-139.178.89.65:34108.service: Deactivated successfully.
Sep 9 23:45:50.982029 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:45:50.985629 systemd-logind[1981]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:45:50.989520 systemd-logind[1981]: Removed session 15.
Sep 9 23:45:55.428875 containerd[2011]: time="2025-09-09T23:45:55.428796477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\" id:\"54e7b68cb0b95155104ce7f571184e6761733f264af86aff06a77c876cf6bd03\" pid:6112 exited_at:{seconds:1757461555 nanos:428478162}"
Sep 9 23:45:56.007531 systemd[1]: Started sshd@15-172.31.18.64:22-139.178.89.65:34110.service - OpenSSH per-connection server daemon (139.178.89.65:34110).
Sep 9 23:45:56.208704 sshd[6122]: Accepted publickey for core from 139.178.89.65 port 34110 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:56.211113 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:56.219346 systemd-logind[1981]: New session 16 of user core.
Sep 9 23:45:56.232813 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:45:56.504630 sshd[6125]: Connection closed by 139.178.89.65 port 34110
Sep 9 23:45:56.505974 sshd-session[6122]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:56.513539 systemd[1]: sshd@15-172.31.18.64:22-139.178.89.65:34110.service: Deactivated successfully.
Sep 9 23:45:56.519363 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 23:45:56.521522 systemd-logind[1981]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:45:56.524877 systemd-logind[1981]: Removed session 16.
Sep 9 23:45:56.539305 systemd[1]: Started sshd@16-172.31.18.64:22-139.178.89.65:34118.service - OpenSSH per-connection server daemon (139.178.89.65:34118).
Sep 9 23:45:56.750509 sshd[6136]: Accepted publickey for core from 139.178.89.65 port 34118 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:56.753038 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:56.763489 systemd-logind[1981]: New session 17 of user core.
Sep 9 23:45:56.769660 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 23:45:57.384030 sshd[6139]: Connection closed by 139.178.89.65 port 34118
Sep 9 23:45:57.385504 sshd-session[6136]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:57.392680 systemd[1]: sshd@16-172.31.18.64:22-139.178.89.65:34118.service: Deactivated successfully.
Sep 9 23:45:57.397222 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 23:45:57.399112 systemd-logind[1981]: Session 17 logged out. Waiting for processes to exit.
Sep 9 23:45:57.404481 systemd-logind[1981]: Removed session 17.
Sep 9 23:45:57.421480 systemd[1]: Started sshd@17-172.31.18.64:22-139.178.89.65:34120.service - OpenSSH per-connection server daemon (139.178.89.65:34120).
Sep 9 23:45:57.628441 sshd[6149]: Accepted publickey for core from 139.178.89.65 port 34120 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:57.630975 sshd-session[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:57.642862 systemd-logind[1981]: New session 18 of user core.
Sep 9 23:45:57.652665 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:45:58.776250 sshd[6152]: Connection closed by 139.178.89.65 port 34120
Sep 9 23:45:58.777580 sshd-session[6149]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:58.789654 systemd-logind[1981]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:45:58.789828 systemd[1]: sshd@17-172.31.18.64:22-139.178.89.65:34120.service: Deactivated successfully.
Sep 9 23:45:58.795294 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:45:58.831314 systemd-logind[1981]: Removed session 18.
Sep 9 23:45:58.835256 systemd[1]: Started sshd@18-172.31.18.64:22-139.178.89.65:34122.service - OpenSSH per-connection server daemon (139.178.89.65:34122).
Sep 9 23:45:59.058206 sshd[6167]: Accepted publickey for core from 139.178.89.65 port 34122 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:59.060990 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:59.073042 systemd-logind[1981]: New session 19 of user core.
Sep 9 23:45:59.084700 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:45:59.636423 sshd[6172]: Connection closed by 139.178.89.65 port 34122
Sep 9 23:45:59.636618 sshd-session[6167]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:59.645358 systemd[1]: sshd@18-172.31.18.64:22-139.178.89.65:34122.service: Deactivated successfully.
Sep 9 23:45:59.650988 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:45:59.655270 systemd-logind[1981]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:45:59.673601 systemd[1]: Started sshd@19-172.31.18.64:22-139.178.89.65:34138.service - OpenSSH per-connection server daemon (139.178.89.65:34138).
Sep 9 23:45:59.676943 systemd-logind[1981]: Removed session 19.
Sep 9 23:45:59.867009 sshd[6182]: Accepted publickey for core from 139.178.89.65 port 34138 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:59.869490 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:59.878511 systemd-logind[1981]: New session 20 of user core.
Sep 9 23:45:59.885700 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:46:00.139745 sshd[6185]: Connection closed by 139.178.89.65 port 34138
Sep 9 23:46:00.139979 sshd-session[6182]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:00.147322 systemd[1]: sshd@19-172.31.18.64:22-139.178.89.65:34138.service: Deactivated successfully.
Sep 9 23:46:00.152725 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:46:00.154984 systemd-logind[1981]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:46:00.158755 systemd-logind[1981]: Removed session 20.
Sep 9 23:46:01.440421 containerd[2011]: time="2025-09-09T23:46:01.440192266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\" id:\"9ad3c070fb166a62ce747c6d27ee3b6079aa1eb0d49cca4308d2ce71a5f29a64\" pid:6208 exited_at:{seconds:1757461561 nanos:439786920}"
Sep 9 23:46:03.165063 containerd[2011]: time="2025-09-09T23:46:03.164963109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" id:\"95252f4eae24741f74855a5db1e400da4b557e8c8569c2e4193dd61b34e1dc79\" pid:6229 exited_at:{seconds:1757461563 nanos:164158911}"
Sep 9 23:46:05.183936 systemd[1]: Started sshd@20-172.31.18.64:22-139.178.89.65:54020.service - OpenSSH per-connection server daemon (139.178.89.65:54020).
Sep 9 23:46:05.396194 sshd[6243]: Accepted publickey for core from 139.178.89.65 port 54020 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:05.399873 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:05.409485 systemd-logind[1981]: New session 21 of user core.
Sep 9 23:46:05.416617 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:46:05.676894 sshd[6246]: Connection closed by 139.178.89.65 port 54020
Sep 9 23:46:05.677353 sshd-session[6243]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:05.684676 systemd[1]: sshd@20-172.31.18.64:22-139.178.89.65:54020.service: Deactivated successfully.
Sep 9 23:46:05.690199 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:46:05.693577 systemd-logind[1981]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:46:05.697286 systemd-logind[1981]: Removed session 21.
Sep 9 23:46:10.719891 systemd[1]: Started sshd@21-172.31.18.64:22-139.178.89.65:59268.service - OpenSSH per-connection server daemon (139.178.89.65:59268).
Sep 9 23:46:10.918058 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 59268 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:10.920565 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:10.928604 systemd-logind[1981]: New session 22 of user core.
Sep 9 23:46:10.941655 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:46:11.196493 sshd[6263]: Connection closed by 139.178.89.65 port 59268
Sep 9 23:46:11.197784 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:11.205123 systemd[1]: sshd@21-172.31.18.64:22-139.178.89.65:59268.service: Deactivated successfully.
Sep 9 23:46:11.211461 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:46:11.214833 systemd-logind[1981]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:46:11.217519 systemd-logind[1981]: Removed session 22.
Sep 9 23:46:14.086509 containerd[2011]: time="2025-09-09T23:46:14.086443760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\" id:\"b3f4675cebf16e1cb76e3e3966228cba8938b673305f0213082875bcaff11d2d\" pid:6287 exited_at:{seconds:1757461574 nanos:85572617}"
Sep 9 23:46:16.244234 systemd[1]: Started sshd@22-172.31.18.64:22-139.178.89.65:59270.service - OpenSSH per-connection server daemon (139.178.89.65:59270).
Sep 9 23:46:16.459660 sshd[6298]: Accepted publickey for core from 139.178.89.65 port 59270 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:16.462783 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:16.478788 systemd-logind[1981]: New session 23 of user core.
Sep 9 23:46:16.486062 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:46:16.853998 sshd[6301]: Connection closed by 139.178.89.65 port 59270
Sep 9 23:46:16.855187 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:16.869163 systemd[1]: sshd@22-172.31.18.64:22-139.178.89.65:59270.service: Deactivated successfully.
Sep 9 23:46:16.878312 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 23:46:16.886720 systemd-logind[1981]: Session 23 logged out. Waiting for processes to exit.
Sep 9 23:46:16.893989 systemd-logind[1981]: Removed session 23.
Sep 9 23:46:18.293608 containerd[2011]: time="2025-09-09T23:46:18.293538105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2bdeccbd8a00ed32a985fd2b96640eb9867abf670a2b57f495be62015f4f2e\" id:\"2ef5f838f134b4d645fa01f2bfd340e1a341424551ca59f910cbdc1da65aea66\" pid:6328 exited_at:{seconds:1757461578 nanos:292543252}"
Sep 9 23:46:21.902879 systemd[1]: Started sshd@23-172.31.18.64:22-139.178.89.65:35340.service - OpenSSH per-connection server daemon (139.178.89.65:35340).
Sep 9 23:46:22.127610 sshd[6338]: Accepted publickey for core from 139.178.89.65 port 35340 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:22.130617 sshd-session[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:22.142947 systemd-logind[1981]: New session 24 of user core.
Sep 9 23:46:22.150626 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 23:46:22.460633 sshd[6341]: Connection closed by 139.178.89.65 port 35340
Sep 9 23:46:22.461127 sshd-session[6338]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:22.472110 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 23:46:22.475147 systemd[1]: sshd@23-172.31.18.64:22-139.178.89.65:35340.service: Deactivated successfully.
Sep 9 23:46:22.487741 systemd-logind[1981]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:46:22.493501 systemd-logind[1981]: Removed session 24.
Sep 9 23:46:27.501891 systemd[1]: Started sshd@24-172.31.18.64:22-139.178.89.65:35356.service - OpenSSH per-connection server daemon (139.178.89.65:35356).
Sep 9 23:46:27.711147 sshd[6361]: Accepted publickey for core from 139.178.89.65 port 35356 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:27.714229 sshd-session[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:27.724460 systemd-logind[1981]: New session 25 of user core.
Sep 9 23:46:27.732713 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 23:46:28.069536 sshd[6364]: Connection closed by 139.178.89.65 port 35356
Sep 9 23:46:28.070308 sshd-session[6361]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:28.081515 systemd-logind[1981]: Session 25 logged out. Waiting for processes to exit.
Sep 9 23:46:28.083448 systemd[1]: sshd@24-172.31.18.64:22-139.178.89.65:35356.service: Deactivated successfully.
Sep 9 23:46:28.090322 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 23:46:28.098507 systemd-logind[1981]: Removed session 25.
Sep 9 23:46:31.507464 containerd[2011]: time="2025-09-09T23:46:31.507159305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a53d8750db6eff2c0524fe14584cfd25c3df728c17b9aab5e61d8432aebff17\" id:\"9605d26958bb27f4237c4b99eaa4417a15d0c14f4e7e06f08e4443e2860d791b\" pid:6389 exited_at:{seconds:1757461591 nanos:506019029}"
Sep 9 23:46:33.115096 systemd[1]: Started sshd@25-172.31.18.64:22-139.178.89.65:51092.service - OpenSSH per-connection server daemon (139.178.89.65:51092).
Sep 9 23:46:33.262613 containerd[2011]: time="2025-09-09T23:46:33.262552230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9f5296fae2315b433fc475c9acb4c13029248f74d8215fbd8db2c901f65ef\" id:\"b961213155a683fa30512dc4f1285b07007a7277829948cfbe23558b67e3376f\" pid:6413 exited_at:{seconds:1757461593 nanos:262096062}"
Sep 9 23:46:33.336706 sshd[6424]: Accepted publickey for core from 139.178.89.65 port 51092 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:33.339346 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:33.355473 systemd-logind[1981]: New session 26 of user core.
Sep 9 23:46:33.363795 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 23:46:33.705681 sshd[6429]: Connection closed by 139.178.89.65 port 51092
Sep 9 23:46:33.706933 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:33.720068 systemd[1]: sshd@25-172.31.18.64:22-139.178.89.65:51092.service: Deactivated successfully.
Sep 9 23:46:33.725764 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 23:46:33.729409 systemd-logind[1981]: Session 26 logged out. Waiting for processes to exit.
Sep 9 23:46:33.734152 systemd-logind[1981]: Removed session 26.