Sep 9 23:43:17.092812 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 9 23:43:17.092859 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:43:17.092883 kernel: KASLR disabled due to lack of seed
Sep 9 23:43:17.092899 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:43:17.092930 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 9 23:43:17.092950 kernel: secureboot: Secure boot disabled
Sep 9 23:43:17.092969 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:43:17.092984 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 9 23:43:17.093000 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 9 23:43:17.093017 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 9 23:43:17.093032 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 9 23:43:17.093054 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 9 23:43:17.093069 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 9 23:43:17.093084 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 9 23:43:17.093102 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 9 23:43:17.093117 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 9 23:43:17.093138 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 9 23:43:17.093154 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 9 23:43:17.093170 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 9 23:43:17.093186 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 9 23:43:17.093202 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 9 23:43:17.093218 kernel: printk: legacy bootconsole [uart0] enabled
Sep 9 23:43:17.093234 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:43:17.093250 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:43:17.093266 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 9 23:43:17.093282 kernel: Zone ranges:
Sep 9 23:43:17.093298 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 9 23:43:17.093318 kernel: DMA32 empty
Sep 9 23:43:17.093334 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 9 23:43:17.093350 kernel: Device empty
Sep 9 23:43:17.093365 kernel: Movable zone start for each node
Sep 9 23:43:17.093381 kernel: Early memory node ranges
Sep 9 23:43:17.093397 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 9 23:43:17.093412 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 9 23:43:17.093428 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 9 23:43:17.093444 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 9 23:43:17.093459 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 9 23:43:17.093475 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 9 23:43:17.093490 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 9 23:43:17.093511 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 9 23:43:17.093533 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:43:17.093550 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 9 23:43:17.093567 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 9 23:43:17.093584 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:43:17.095203 kernel: psci: PSCIv1.0 detected in firmware.
Sep 9 23:43:17.095224 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:43:17.095242 kernel: psci: Trusted OS migration not required
Sep 9 23:43:17.095259 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:43:17.095277 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 9 23:43:17.095295 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:43:17.095312 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:43:17.095330 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 23:43:17.095347 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:43:17.095365 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:43:17.095382 kernel: CPU features: detected: Spectre-v2
Sep 9 23:43:17.095408 kernel: CPU features: detected: Spectre-v3a
Sep 9 23:43:17.095426 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:43:17.095443 kernel: CPU features: detected: ARM erratum 1742098
Sep 9 23:43:17.095460 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 9 23:43:17.095477 kernel: alternatives: applying boot alternatives
Sep 9 23:43:17.095496 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:43:17.095515 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:43:17.095532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:43:17.095550 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:43:17.095567 kernel: Fallback order for Node 0: 0
Sep 9 23:43:17.095615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 9 23:43:17.095667 kernel: Policy zone: Normal
Sep 9 23:43:17.095685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:43:17.095702 kernel: software IO TLB: area num 2.
Sep 9 23:43:17.095718 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB)
Sep 9 23:43:17.095735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 23:43:17.095751 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:43:17.095769 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:43:17.095786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 23:43:17.095803 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:43:17.095820 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:43:17.095836 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:43:17.095859 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 23:43:17.095876 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:43:17.095893 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:43:17.095909 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:43:17.095925 kernel: GICv3: 96 SPIs implemented
Sep 9 23:43:17.095942 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:43:17.095958 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:43:17.095974 kernel: GICv3: GICv3 features: 16 PPIs
Sep 9 23:43:17.095991 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:43:17.096007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 9 23:43:17.096024 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 9 23:43:17.096040 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:43:17.096061 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:43:17.096078 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 9 23:43:17.096095 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 9 23:43:17.096111 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 9 23:43:17.096128 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:43:17.096144 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 9 23:43:17.096161 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 9 23:43:17.096177 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 9 23:43:17.096194 kernel: Console: colour dummy device 80x25
Sep 9 23:43:17.096212 kernel: printk: legacy console [tty1] enabled
Sep 9 23:43:17.096229 kernel: ACPI: Core revision 20240827
Sep 9 23:43:17.096250 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 9 23:43:17.096268 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:43:17.096285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:43:17.096302 kernel: landlock: Up and running.
Sep 9 23:43:17.096319 kernel: SELinux: Initializing.
Sep 9 23:43:17.096357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:43:17.096376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:43:17.096393 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:43:17.096410 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:43:17.096433 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:43:17.096450 kernel: Remapping and enabling EFI services.
Sep 9 23:43:17.096467 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:43:17.096484 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:43:17.096500 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 9 23:43:17.096517 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 9 23:43:17.096534 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 9 23:43:17.096551 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 23:43:17.096568 kernel: SMP: Total of 2 processors activated.
Sep 9 23:43:17.097696 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:43:17.097723 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:43:17.097746 kernel: CPU features: detected: 32-bit EL1 Support
Sep 9 23:43:17.097764 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:43:17.097782 kernel: alternatives: applying system-wide alternatives
Sep 9 23:43:17.097801 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved)
Sep 9 23:43:17.097820 kernel: devtmpfs: initialized
Sep 9 23:43:17.097843 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:43:17.097862 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 23:43:17.097880 kernel: 17056 pages in range for non-PLT usage
Sep 9 23:43:17.097898 kernel: 508576 pages in range for PLT usage
Sep 9 23:43:17.097916 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:43:17.097934 kernel: SMBIOS 3.0.0 present.
Sep 9 23:43:17.097952 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 9 23:43:17.097970 kernel: DMI: Memory slots populated: 0/0
Sep 9 23:43:17.097988 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:43:17.098010 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:43:17.098029 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:43:17.098047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:43:17.098065 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:43:17.098082 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Sep 9 23:43:17.098100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:43:17.098118 kernel: cpuidle: using governor menu
Sep 9 23:43:17.098136 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:43:17.098153 kernel: ASID allocator initialised with 65536 entries
Sep 9 23:43:17.098176 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:43:17.098194 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:43:17.098212 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:43:17.098230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:43:17.098248 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:43:17.098266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:43:17.098284 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:43:17.098301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:43:17.098320 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:43:17.098342 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:43:17.098360 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:43:17.098378 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:43:17.098396 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:43:17.098414 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:43:17.098432 kernel: ACPI: Interpreter enabled
Sep 9 23:43:17.098449 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:43:17.098466 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:43:17.098484 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:43:17.098506 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:43:17.098524 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 9 23:43:17.099994 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:43:17.100204 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:43:17.100411 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:43:17.100619 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 9 23:43:17.100805 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 9 23:43:17.100838 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 9 23:43:17.100857 kernel: acpiphp: Slot [1] registered
Sep 9 23:43:17.100875 kernel: acpiphp: Slot [2] registered
Sep 9 23:43:17.100892 kernel: acpiphp: Slot [3] registered
Sep 9 23:43:17.100910 kernel: acpiphp: Slot [4] registered
Sep 9 23:43:17.100927 kernel: acpiphp: Slot [5] registered
Sep 9 23:43:17.100945 kernel: acpiphp: Slot [6] registered
Sep 9 23:43:17.100962 kernel: acpiphp: Slot [7] registered
Sep 9 23:43:17.100980 kernel: acpiphp: Slot [8] registered
Sep 9 23:43:17.100997 kernel: acpiphp: Slot [9] registered
Sep 9 23:43:17.101019 kernel: acpiphp: Slot [10] registered
Sep 9 23:43:17.101036 kernel: acpiphp: Slot [11] registered
Sep 9 23:43:17.101054 kernel: acpiphp: Slot [12] registered
Sep 9 23:43:17.101071 kernel: acpiphp: Slot [13] registered
Sep 9 23:43:17.101089 kernel: acpiphp: Slot [14] registered
Sep 9 23:43:17.101106 kernel: acpiphp: Slot [15] registered
Sep 9 23:43:17.101124 kernel: acpiphp: Slot [16] registered
Sep 9 23:43:17.101141 kernel: acpiphp: Slot [17] registered
Sep 9 23:43:17.101159 kernel: acpiphp: Slot [18] registered
Sep 9 23:43:17.101180 kernel: acpiphp: Slot [19] registered
Sep 9 23:43:17.101198 kernel: acpiphp: Slot [20] registered
Sep 9 23:43:17.101216 kernel: acpiphp: Slot [21] registered
Sep 9 23:43:17.101233 kernel: acpiphp: Slot [22] registered
Sep 9 23:43:17.101251 kernel: acpiphp: Slot [23] registered
Sep 9 23:43:17.101269 kernel: acpiphp: Slot [24] registered
Sep 9 23:43:17.101286 kernel: acpiphp: Slot [25] registered
Sep 9 23:43:17.101303 kernel: acpiphp: Slot [26] registered
Sep 9 23:43:17.101321 kernel: acpiphp: Slot [27] registered
Sep 9 23:43:17.101338 kernel: acpiphp: Slot [28] registered
Sep 9 23:43:17.101360 kernel: acpiphp: Slot [29] registered
Sep 9 23:43:17.101378 kernel: acpiphp: Slot [30] registered
Sep 9 23:43:17.101395 kernel: acpiphp: Slot [31] registered
Sep 9 23:43:17.101413 kernel: PCI host bridge to bus 0000:00
Sep 9 23:43:17.103655 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 9 23:43:17.103854 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:43:17.104017 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:43:17.104180 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 9 23:43:17.104433 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:43:17.104677 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 9 23:43:17.104871 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 9 23:43:17.105079 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 9 23:43:17.105272 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 9 23:43:17.106816 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:43:17.107069 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 9 23:43:17.107277 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 9 23:43:17.107480 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 9 23:43:17.107733 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 9 23:43:17.107943 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:43:17.108146 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 9 23:43:17.108371 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 9 23:43:17.110870 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 9 23:43:17.111151 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 9 23:43:17.111345 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 9 23:43:17.111518 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 9 23:43:17.111725 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:43:17.114389 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:43:17.114434 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:43:17.114463 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:43:17.114481 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:43:17.114499 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:43:17.114517 kernel: iommu: Default domain type: Translated
Sep 9 23:43:17.114535 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:43:17.114552 kernel: efivars: Registered efivars operations
Sep 9 23:43:17.114570 kernel: vgaarb: loaded
Sep 9 23:43:17.114676 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:43:17.114701 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:43:17.114726 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:43:17.114745 kernel: pnp: PnP ACPI init
Sep 9 23:43:17.114967 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 9 23:43:17.114995 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:43:17.115014 kernel: NET: Registered PF_INET protocol family
Sep 9 23:43:17.115032 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:43:17.115051 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:43:17.115069 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:43:17.115093 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:43:17.115111 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:43:17.115129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:43:17.115147 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:43:17.115165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:43:17.115184 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:43:17.115201 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:43:17.115220 kernel: kvm [1]: HYP mode not available
Sep 9 23:43:17.115237 kernel: Initialise system trusted keyrings
Sep 9 23:43:17.115260 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:43:17.115278 kernel: Key type asymmetric registered
Sep 9 23:43:17.115295 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:43:17.115313 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:43:17.115331 kernel: io scheduler mq-deadline registered
Sep 9 23:43:17.115348 kernel: io scheduler kyber registered
Sep 9 23:43:17.115365 kernel: io scheduler bfq registered
Sep 9 23:43:17.115570 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 9 23:43:17.116985 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:43:17.117344 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:43:17.117968 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 9 23:43:17.117992 kernel: ACPI: button: Sleep Button [SLPB]
Sep 9 23:43:17.118010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:43:17.118029 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 9 23:43:17.118242 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 9 23:43:17.118268 kernel: printk: legacy console [ttyS0] disabled
Sep 9 23:43:17.118287 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 9 23:43:17.118311 kernel: printk: legacy console [ttyS0] enabled
Sep 9 23:43:17.118329 kernel: printk: legacy bootconsole [uart0] disabled
Sep 9 23:43:17.118347 kernel: thunder_xcv, ver 1.0
Sep 9 23:43:17.118364 kernel: thunder_bgx, ver 1.0
Sep 9 23:43:17.118382 kernel: nicpf, ver 1.0
Sep 9 23:43:17.118399 kernel: nicvf, ver 1.0
Sep 9 23:43:17.120859 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:43:17.121096 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:43:16 UTC (1757461396)
Sep 9 23:43:17.121130 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:43:17.121149 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 9 23:43:17.121167 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:43:17.121185 kernel: watchdog: NMI not fully supported
Sep 9 23:43:17.121203 kernel: Segment Routing with IPv6
Sep 9 23:43:17.121221 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:43:17.121238 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:43:17.121256 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:43:17.121273 kernel: Key type dns_resolver registered
Sep 9 23:43:17.121296 kernel: registered taskstats version 1
Sep 9 23:43:17.121314 kernel: Loading compiled-in X.509 certificates
Sep 9 23:43:17.121331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:43:17.121350 kernel: Demotion targets for Node 0: null
Sep 9 23:43:17.121368 kernel: Key type .fscrypt registered
Sep 9 23:43:17.121385 kernel: Key type fscrypt-provisioning registered
Sep 9 23:43:17.121403 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:43:17.121421 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:43:17.121439 kernel: ima: No architecture policies found
Sep 9 23:43:17.121461 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:43:17.121480 kernel: clk: Disabling unused clocks
Sep 9 23:43:17.121497 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:43:17.121515 kernel: Warning: unable to open an initial console.
Sep 9 23:43:17.121534 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:43:17.121551 kernel: Run /init as init process
Sep 9 23:43:17.121569 kernel: with arguments:
Sep 9 23:43:17.121616 kernel: /init
Sep 9 23:43:17.121639 kernel: with environment:
Sep 9 23:43:17.121656 kernel: HOME=/
Sep 9 23:43:17.121681 kernel: TERM=linux
Sep 9 23:43:17.121699 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:43:17.121718 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:43:17.121743 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:43:17.121763 systemd[1]: Detected virtualization amazon.
Sep 9 23:43:17.121782 systemd[1]: Detected architecture arm64.
Sep 9 23:43:17.121801 systemd[1]: Running in initrd.
Sep 9 23:43:17.121824 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:43:17.121844 systemd[1]: Hostname set to <localhost>.
Sep 9 23:43:17.121863 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:43:17.121882 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:43:17.121901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:43:17.121920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:43:17.121941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:43:17.121961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:43:17.121984 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:43:17.122005 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:43:17.122027 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:43:17.122047 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:43:17.122067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:43:17.122086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:43:17.122105 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:43:17.122130 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:43:17.122149 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:43:17.122168 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:43:17.122188 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:43:17.122207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:43:17.122227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:43:17.122246 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:43:17.122266 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:43:17.122289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:43:17.122310 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:43:17.122329 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:43:17.122349 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:43:17.122369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:43:17.122389 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:43:17.122409 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:43:17.122429 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:43:17.122448 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:43:17.122472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:43:17.122491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:17.122511 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:43:17.122532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:43:17.122556 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:43:17.124347 systemd-journald[258]: Collecting audit messages is disabled.
Sep 9 23:43:17.125175 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:43:17.126659 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:43:17.126698 kernel: Bridge firewalling registered
Sep 9 23:43:17.126736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:43:17.126758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:17.126781 systemd-journald[258]: Journal started
Sep 9 23:43:17.126824 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2d16a042ce8230c8c53ef6796717aa) is 8M, max 75.3M, 67.3M free.
Sep 9 23:43:17.065117 systemd-modules-load[259]: Inserted module 'overlay'
Sep 9 23:43:17.110045 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 9 23:43:17.133920 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:43:17.141627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:43:17.148624 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:43:17.151830 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:43:17.164403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:43:17.176189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:43:17.190729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:17.209085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:43:17.209894 systemd-tmpfiles[285]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:43:17.221351 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:43:17.226508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:43:17.235746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:43:17.250317 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:43:17.276436 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:43:17.349653 systemd-resolved[302]: Positive Trust Anchors:
Sep 9 23:43:17.349685 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:43:17.349747 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:43:17.432650 kernel: SCSI subsystem initialized
Sep 9 23:43:17.440648 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:43:17.452648 kernel: iscsi: registered transport (tcp)
Sep 9 23:43:17.474114 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:43:17.474205 kernel: QLogic iSCSI HBA Driver
Sep 9 23:43:17.507763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:43:17.546149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:43:17.561065 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:43:17.616629 kernel: random: crng init done
Sep 9 23:43:17.616923 systemd-resolved[302]: Defaulting to hostname 'linux'.
Sep 9 23:43:17.620987 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:43:17.626014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:43:17.651288 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:43:17.653666 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:43:17.741638 kernel: raid6: neonx8 gen() 6578 MB/s
Sep 9 23:43:17.757622 kernel: raid6: neonx4 gen() 6618 MB/s
Sep 9 23:43:17.774621 kernel: raid6: neonx2 gen() 5494 MB/s
Sep 9 23:43:17.791621 kernel: raid6: neonx1 gen() 3969 MB/s
Sep 9 23:43:17.808621 kernel: raid6: int64x8 gen() 3672 MB/s
Sep 9 23:43:17.825622 kernel: raid6: int64x4 gen() 3714 MB/s
Sep 9 23:43:17.842621 kernel: raid6: int64x2 gen() 3612 MB/s
Sep 9 23:43:17.860547 kernel: raid6: int64x1 gen() 2775 MB/s
Sep 9 23:43:17.860578 kernel: raid6: using algorithm neonx4 gen() 6618 MB/s
Sep 9 23:43:17.878518 kernel: raid6: .... xor() 4844 MB/s, rmw enabled
Sep 9 23:43:17.878560 kernel: raid6: using neon recovery algorithm
Sep 9 23:43:17.887023 kernel: xor: measuring software checksum speed
Sep 9 23:43:17.887078 kernel: 8regs : 12944 MB/sec
Sep 9 23:43:17.888164 kernel: 32regs : 13044 MB/sec
Sep 9 23:43:17.889422 kernel: arm64_neon : 8803 MB/sec
Sep 9 23:43:17.889461 kernel: xor: using function: 32regs (13044 MB/sec)
Sep 9 23:43:17.980637 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:43:17.991241 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:43:18.000352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:43:18.050367 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Sep 9 23:43:18.060193 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:43:18.076405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:43:18.105066 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Sep 9 23:43:18.147467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:43:18.157800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:43:18.289458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:43:18.302555 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:43:18.445703 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:43:18.445782 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 9 23:43:18.462632 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 9 23:43:18.462714 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 9 23:43:18.469069 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 9 23:43:18.469342 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 9 23:43:18.471644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:43:18.471889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:18.482448 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 9 23:43:18.483058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:18.498724 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:43:18.498799 kernel: GPT:9289727 != 16777215
Sep 9 23:43:18.498826 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:43:18.498852 kernel: GPT:9289727 != 16777215
Sep 9 23:43:18.498876 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:43:18.485739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:43:18.516729 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:18.516767 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b8:57:29:df:81
Sep 9 23:43:18.500353 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:43:18.516965 (udev-worker)[569]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:43:18.547027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:43:18.566631 kernel: nvme nvme0: using unchecked data buffer
Sep 9 23:43:18.710981 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 9 23:43:18.755002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 9 23:43:18.760203 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:43:18.788199 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 23:43:18.827128 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 9 23:43:18.833928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 9 23:43:18.839383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:43:18.842131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:43:18.849887 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:43:18.855373 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:43:18.864772 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:43:18.888227 disk-uuid[691]: Primary Header is updated.
Sep 9 23:43:18.888227 disk-uuid[691]: Secondary Entries is updated.
Sep 9 23:43:18.888227 disk-uuid[691]: Secondary Header is updated.
Sep 9 23:43:18.906655 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:18.913660 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:43:19.921615 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:43:19.923486 disk-uuid[695]: The operation has completed successfully.
Sep 9 23:43:20.099406 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:43:20.099639 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:43:20.187063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:43:20.210903 sh[960]: Success
Sep 9 23:43:20.239802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:43:20.239875 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:43:20.241825 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:43:20.253641 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:43:20.354512 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:43:20.362163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:43:20.380717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:43:20.401653 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (983)
Sep 9 23:43:20.405930 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:43:20.405979 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:20.553854 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 23:43:20.553917 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:43:20.553944 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:43:20.577923 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:43:20.578728 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:43:20.584117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:43:20.585341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:43:20.605738 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:43:20.640654 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1006)
Sep 9 23:43:20.646407 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:20.646472 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:20.655964 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:20.656036 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:20.664670 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:20.669780 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:43:20.676461 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:43:20.799643 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:43:20.809563 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:43:20.883678 systemd-networkd[1152]: lo: Link UP
Sep 9 23:43:20.883691 systemd-networkd[1152]: lo: Gained carrier
Sep 9 23:43:20.889471 systemd-networkd[1152]: Enumeration completed
Sep 9 23:43:20.889651 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:43:20.890532 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:20.890539 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:43:20.897345 systemd[1]: Reached target network.target - Network.
Sep 9 23:43:20.908829 systemd-networkd[1152]: eth0: Link UP
Sep 9 23:43:20.908837 systemd-networkd[1152]: eth0: Gained carrier
Sep 9 23:43:20.908859 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:43:20.933704 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.26.206/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 9 23:43:21.285925 ignition[1049]: Ignition 2.21.0
Sep 9 23:43:21.285956 ignition[1049]: Stage: fetch-offline
Sep 9 23:43:21.289290 ignition[1049]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:21.289328 ignition[1049]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:21.293893 ignition[1049]: Ignition finished successfully
Sep 9 23:43:21.297562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:43:21.304807 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 23:43:21.356835 ignition[1163]: Ignition 2.21.0
Sep 9 23:43:21.356867 ignition[1163]: Stage: fetch
Sep 9 23:43:21.357549 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:21.358170 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:21.359201 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:21.383841 ignition[1163]: PUT result: OK
Sep 9 23:43:21.401985 ignition[1163]: parsed url from cmdline: ""
Sep 9 23:43:21.402009 ignition[1163]: no config URL provided
Sep 9 23:43:21.402027 ignition[1163]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:43:21.402053 ignition[1163]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:43:21.402090 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:21.406499 ignition[1163]: PUT result: OK
Sep 9 23:43:21.408579 ignition[1163]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 9 23:43:21.412414 ignition[1163]: GET result: OK
Sep 9 23:43:21.419780 unknown[1163]: fetched base config from "system"
Sep 9 23:43:21.412611 ignition[1163]: parsing config with SHA512: 32d37c1edad700ddbf2451acee0c23672c9f9efb08f48eac7026bd35d23d2ce5b995497db6c99948af96cd414b11e30c6058aa56c213f4b54aa95bdec4e722f1
Sep 9 23:43:21.419795 unknown[1163]: fetched base config from "system"
Sep 9 23:43:21.420356 ignition[1163]: fetch: fetch complete
Sep 9 23:43:21.419808 unknown[1163]: fetched user config from "aws"
Sep 9 23:43:21.420367 ignition[1163]: fetch: fetch passed
Sep 9 23:43:21.427099 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 23:43:21.420451 ignition[1163]: Ignition finished successfully
Sep 9 23:43:21.442559 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:43:21.492896 ignition[1170]: Ignition 2.21.0
Sep 9 23:43:21.492928 ignition[1170]: Stage: kargs
Sep 9 23:43:21.494211 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:21.494576 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:21.494748 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:21.502704 ignition[1170]: PUT result: OK
Sep 9 23:43:21.514267 ignition[1170]: kargs: kargs passed
Sep 9 23:43:21.514374 ignition[1170]: Ignition finished successfully
Sep 9 23:43:21.521380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:43:21.526068 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:43:21.583101 ignition[1177]: Ignition 2.21.0
Sep 9 23:43:21.583615 ignition[1177]: Stage: disks
Sep 9 23:43:21.584156 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:21.584179 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:21.589673 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:21.595656 ignition[1177]: PUT result: OK
Sep 9 23:43:21.601701 ignition[1177]: disks: disks passed
Sep 9 23:43:21.601994 ignition[1177]: Ignition finished successfully
Sep 9 23:43:21.607390 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:43:21.612903 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:43:21.615583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:43:21.618298 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:43:21.622665 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:43:21.627217 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:43:21.630609 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:43:21.684897 systemd-fsck[1186]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:43:21.690391 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:43:21.698213 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:43:21.833638 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:43:21.835251 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:43:21.837564 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:43:21.840421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:43:21.843093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:43:21.843704 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:43:21.843776 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:43:21.843818 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:43:21.884909 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:43:21.890786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:43:21.906763 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1205)
Sep 9 23:43:21.911088 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:21.911146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:21.919861 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:21.920474 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:21.922705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:43:21.971767 systemd-networkd[1152]: eth0: Gained IPv6LL
Sep 9 23:43:22.190726 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:43:22.219625 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:43:22.228447 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:43:22.236437 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:43:22.493277 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:43:22.497922 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:43:22.509436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:43:22.527116 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:43:22.533435 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:22.576629 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:43:22.584457 ignition[1318]: INFO : Ignition 2.21.0
Sep 9 23:43:22.584457 ignition[1318]: INFO : Stage: mount
Sep 9 23:43:22.590410 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:22.590410 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:22.590410 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:22.597934 ignition[1318]: INFO : PUT result: OK
Sep 9 23:43:22.603446 ignition[1318]: INFO : mount: mount passed
Sep 9 23:43:22.605879 ignition[1318]: INFO : Ignition finished successfully
Sep 9 23:43:22.608210 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:43:22.615289 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:43:22.838135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:43:22.876632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1330)
Sep 9 23:43:22.880898 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:43:22.880958 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:43:22.888014 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:43:22.888119 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:43:22.891511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:43:22.937207 ignition[1347]: INFO : Ignition 2.21.0
Sep 9 23:43:22.937207 ignition[1347]: INFO : Stage: files
Sep 9 23:43:22.941566 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:43:22.941566 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:43:22.941566 ignition[1347]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:43:22.949391 ignition[1347]: INFO : PUT result: OK
Sep 9 23:43:22.954098 ignition[1347]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:43:22.965065 ignition[1347]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:43:22.965065 ignition[1347]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:43:22.974691 ignition[1347]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:43:22.977772 ignition[1347]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:43:22.981232 unknown[1347]: wrote ssh authorized keys file for user: core
Sep 9 23:43:22.983767 ignition[1347]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:43:22.994628 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 23:43:22.994628 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 23:43:23.072914 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:43:23.358421 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 23:43:23.362621 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:43:23.366580 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:43:23.370284 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:43:23.374155 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:43:23.377897 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:43:23.381778 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:43:23.385556 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:43:23.389520 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:43:23.397925 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:43:23.401787 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:43:23.405725 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:43:23.411236 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:43:23.416922 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:43:23.421659 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 23:43:23.859516 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 23:43:24.273508 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:43:24.273508 ignition[1347]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 23:43:24.281239 ignition[1347]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:43:24.281239 ignition[1347]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:43:24.281239 ignition[1347]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 23:43:24.281239 ignition[1347]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:43:24.281239 ignition[1347]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:43:24.299901 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:43:24.299901 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:43:24.299901 ignition[1347]: INFO : files: files passed
Sep 9 23:43:24.299901 ignition[1347]: INFO : Ignition finished successfully
Sep 9 23:43:24.301043 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:43:24.303700 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:43:24.320706 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:43:24.339891 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:43:24.340523 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:43:24.360009 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:24.360009 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:24.368012 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:43:24.371723 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:43:24.375971 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:43:24.385793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:43:24.455227 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:43:24.455425 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 23:43:24.459034 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 23:43:24.463658 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 23:43:24.466444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 23:43:24.473433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 23:43:24.519471 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:43:24.523930 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 23:43:24.573331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:43:24.576039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:43:24.576324 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 23:43:24.576659 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 23:43:24.576883 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:43:24.578193 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 23:43:24.578542 systemd[1]: Stopped target basic.target - Basic System. Sep 9 23:43:24.579227 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 23:43:24.579603 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 23:43:24.579956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 23:43:24.580425 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 23:43:24.586753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 23:43:24.587247 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 23:43:24.587729 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 23:43:24.588064 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 23:43:24.590542 systemd[1]: Stopped target swap.target - Swaps. Sep 9 23:43:24.591206 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 23:43:24.591482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 23:43:24.592652 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:43:24.593095 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:43:24.593333 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 23:43:24.612417 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:43:24.617451 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 23:43:24.617786 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 23:43:24.626634 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 23:43:24.627121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 23:43:24.635297 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 23:43:24.635619 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 23:43:24.647320 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 23:43:24.653443 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 9 23:43:24.653971 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:43:24.708857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 23:43:24.710932 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 23:43:24.711197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:43:24.715953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 23:43:24.716202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 23:43:24.743055 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 23:43:24.745851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 23:43:24.747295 ignition[1401]: INFO : Ignition 2.21.0 Sep 9 23:43:24.747295 ignition[1401]: INFO : Stage: umount Sep 9 23:43:24.766044 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:43:24.766044 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 23:43:24.766044 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 23:43:24.766044 ignition[1401]: INFO : PUT result: OK Sep 9 23:43:24.775538 ignition[1401]: INFO : umount: umount passed Sep 9 23:43:24.775538 ignition[1401]: INFO : Ignition finished successfully Sep 9 23:43:24.773223 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 23:43:24.788256 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 23:43:24.790372 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 23:43:24.795291 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 23:43:24.795475 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 23:43:24.804303 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 23:43:24.804415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 23:43:24.807247 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 23:43:24.807324 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 23:43:24.811388 systemd[1]: Stopped target network.target - Network. Sep 9 23:43:24.813775 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 23:43:24.813866 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 23:43:24.820469 systemd[1]: Stopped target paths.target - Path Units. Sep 9 23:43:24.823016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 23:43:24.838677 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:43:24.841403 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 23:43:24.848116 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 23:43:24.852468 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 23:43:24.852548 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 23:43:24.858148 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 23:43:24.858219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 23:43:24.860569 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 23:43:24.860686 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 23:43:24.866882 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 23:43:24.866962 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 9 23:43:24.870179 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 23:43:24.875653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:43:24.878210 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:43:24.878823 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:43:24.887284 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:43:24.887453 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:43:24.899274 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:43:24.899493 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:43:24.915572 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:43:24.916204 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:43:24.916421 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:43:24.931365 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:43:24.932977 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:43:24.936939 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:43:24.937027 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:43:24.945236 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:43:24.950439 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:43:24.951063 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:43:24.960578 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:43:24.960713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:43:24.969502 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:43:24.969626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:43:24.972508 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:43:24.972613 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:43:24.981405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:43:24.994382 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:43:24.997245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:43:25.011808 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:43:25.020682 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:43:25.024370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:43:25.024450 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:43:25.027804 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:43:25.027869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:43:25.035351 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:43:25.035435 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:43:25.043538 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:43:25.043689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 9 23:43:25.050935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:43:25.051023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:43:25.065314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:43:25.068256 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:43:25.068379 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:43:25.081411 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:43:25.081689 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:43:25.089382 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:43:25.089470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:43:25.103186 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 23:43:25.103316 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:43:25.103403 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:43:25.104128 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:43:25.107934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:43:25.120052 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:43:25.120220 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:43:25.125262 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:43:25.134081 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:43:25.173617 systemd[1]: Switching root. Sep 9 23:43:25.240317 systemd-journald[258]: Journal stopped Sep 9 23:43:27.420851 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). Sep 9 23:43:27.420977 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:43:27.421017 kernel: SELinux: policy capability open_perms=1 Sep 9 23:43:27.421047 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:43:27.421082 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:43:27.421113 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:43:27.421141 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:43:27.421170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:43:27.421199 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:43:27.421228 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:43:27.421255 kernel: audit: type=1403 audit(1757461405.618:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:43:27.421291 systemd[1]: Successfully loaded SELinux policy in 89.828ms. Sep 9 23:43:27.421340 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.524ms. Sep 9 23:43:27.421378 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:43:27.421410 systemd[1]: Detected virtualization amazon. Sep 9 23:43:27.421439 systemd[1]: Detected architecture arm64. 
Sep 9 23:43:27.421467 systemd[1]: Detected first boot. Sep 9 23:43:27.421498 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:43:27.421528 zram_generator::config[1445]: No configuration found. Sep 9 23:43:27.421561 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:43:27.421618 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:43:27.421660 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:43:27.421693 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:43:27.421725 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:43:27.421756 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:43:27.421785 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:43:27.421818 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:43:27.421851 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:43:27.421882 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:43:27.421913 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:43:27.421945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:43:27.421974 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:43:27.422002 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:43:27.422042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:43:27.422070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:43:27.422098 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:43:27.422128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:43:27.422159 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:43:27.422195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:43:27.422222 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 23:43:27.422253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:43:27.422283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:43:27.422322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:43:27.422353 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:43:27.422383 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:43:27.422411 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:43:27.422443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:43:27.422475 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:43:27.422506 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:43:27.422541 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:43:27.422578 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:43:27.422646 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Sep 9 23:43:27.422678 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:43:27.422709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:43:27.422737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:43:27.422771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:43:27.422800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:43:27.422827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:43:27.422855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:43:27.422885 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:43:27.422917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:43:27.422947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:43:27.422975 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:43:27.423004 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:43:27.423036 systemd[1]: Reached target machines.target - Containers. Sep 9 23:43:27.423065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:43:27.423093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:43:27.423124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:43:27.423155 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:43:27.423186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:43:27.423215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:43:27.423245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:43:27.423277 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:43:27.423305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:43:27.423336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:43:27.423364 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:43:27.423392 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:43:27.423421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:43:27.423448 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 23:43:27.423477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:43:27.423509 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:43:27.423540 kernel: loop: module loaded Sep 9 23:43:27.423567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:43:27.423635 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:43:27.423668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 9 23:43:27.423697 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:43:27.423731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:43:27.423761 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:43:27.423789 systemd[1]: Stopped verity-setup.service. Sep 9 23:43:27.423821 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:43:27.423858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:43:27.423899 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:43:27.423931 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:43:27.423960 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:43:27.423989 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:43:27.424019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:43:27.424049 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:43:27.424078 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:43:27.424107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:43:27.424139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:43:27.424173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:43:27.424206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:43:27.424235 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:43:27.424286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:43:27.424321 kernel: ACPI: bus type drm_connector registered Sep 9 23:43:27.424348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:43:27.424376 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:43:27.424404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:43:27.424431 kernel: fuse: init (API version 7.41) Sep 9 23:43:27.424464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:43:27.424492 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:43:27.424522 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:43:27.424551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:43:27.424579 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:43:27.424649 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:43:27.426696 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:43:27.426779 systemd-journald[1531]: Collecting audit messages is disabled. Sep 9 23:43:27.426836 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:43:27.426865 systemd-journald[1531]: Journal started Sep 9 23:43:27.426909 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2d16a042ce8230c8c53ef6796717aa) is 8M, max 75.3M, 67.3M free. Sep 9 23:43:26.727741 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:43:26.750299 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 9 23:43:26.751121 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 9 23:43:27.438707 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:43:27.445538 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:43:27.459265 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:43:27.468880 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:43:27.472622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:43:27.486636 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:43:27.486729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:43:27.499422 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:43:27.502788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:43:27.512349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:43:27.528656 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:43:27.536374 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:43:27.541848 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:43:27.554524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:43:27.560779 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:43:27.593965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:43:27.600490 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:43:27.607380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:43:27.622956 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 23:43:27.625669 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:43:27.636141 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:43:27.679679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:43:27.699728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:43:27.705655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:43:27.731741 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:43:27.741290 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2d16a042ce8230c8c53ef6796717aa is 58.315ms for 939 entries. Sep 9 23:43:27.741290 systemd-journald[1531]: System Journal (/var/log/journal/ec2d16a042ce8230c8c53ef6796717aa) is 8M, max 195.6M, 187.6M free. Sep 9 23:43:27.815114 systemd-journald[1531]: Received client request to flush runtime journal. Sep 9 23:43:27.815204 kernel: loop1: detected capacity change from 0 to 119320 Sep 9 23:43:27.755439 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:43:27.774207 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:43:27.779890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
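For a sense of scale, the journald flush report above, 58.315 ms for 939 entries, averages out to roughly 62 microseconds per entry:

    ms_total, entries = 58.315, 939
    print(round(ms_total / entries * 1000, 1), "us per entry")  # 62.1 us per entry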
Sep 9 23:43:27.825139 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:43:27.866787 kernel: loop2: detected capacity change from 0 to 61256 Sep 9 23:43:27.868389 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Sep 9 23:43:27.868429 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Sep 9 23:43:27.880671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:43:27.926632 kernel: loop3: detected capacity change from 0 to 100608 Sep 9 23:43:28.043643 kernel: loop4: detected capacity change from 0 to 211168 Sep 9 23:43:28.084638 kernel: loop5: detected capacity change from 0 to 119320 Sep 9 23:43:28.113646 kernel: loop6: detected capacity change from 0 to 61256 Sep 9 23:43:28.146642 kernel: loop7: detected capacity change from 0 to 100608 Sep 9 23:43:28.161874 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 9 23:43:28.162864 (sd-merge)[1604]: Merged extensions into '/usr'. Sep 9 23:43:28.172652 systemd[1]: Reload requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:43:28.172682 systemd[1]: Reloading... Sep 9 23:43:28.366098 ldconfig[1556]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:43:28.401639 zram_generator::config[1636]: No configuration found. Sep 9 23:43:28.800289 systemd[1]: Reloading finished in 624 ms. Sep 9 23:43:28.828692 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:43:28.831979 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:43:28.835646 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:43:28.853711 systemd[1]: Starting ensure-sysext.service... Sep 9 23:43:28.859856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:43:28.866035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:43:28.904970 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:43:28.904993 systemd[1]: Reloading... Sep 9 23:43:28.938441 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Sep 9 23:43:28.947437 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:43:28.947511 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:43:28.949363 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:43:28.950985 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:43:28.959178 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:43:28.959776 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Sep 9 23:43:28.959925 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Sep 9 23:43:28.983147 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:43:28.983176 systemd-tmpfiles[1684]: Skipping /boot Sep 9 23:43:29.048072 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 9 23:43:29.048102 systemd-tmpfiles[1684]: Skipping /boot Sep 9 23:43:29.120332 zram_generator::config[1726]: No configuration found. Sep 9 23:43:29.426827 (udev-worker)[1701]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:43:29.727062 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 23:43:29.727235 systemd[1]: Reloading finished in 821 ms. Sep 9 23:43:29.744691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:43:29.763107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:43:29.802122 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:43:29.807478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:43:29.813509 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:43:29.821250 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:43:29.828155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:43:29.838193 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:43:29.855353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:43:29.861186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:43:29.937068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:43:29.943237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:43:29.945831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:43:29.946053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:43:29.950093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:43:29.952679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:43:30.018060 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:43:30.031444 augenrules[1927]: No rules Sep 9 23:43:30.030999 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:43:30.031840 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:43:30.056122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:43:30.058912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:43:30.062837 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:43:30.064706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:43:30.099683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:43:30.112925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:43:30.117772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:43:30.124968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:43:30.137621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 23:43:30.140544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:43:30.140800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:43:30.147108 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:43:30.159094 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:43:30.167704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:43:30.192219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 9 23:43:30.211987 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:43:30.216773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:43:30.221192 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:43:30.223628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:43:30.240238 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:43:30.242780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:43:30.243124 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 23:43:30.248621 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:43:30.264786 systemd[1]: Finished ensure-sysext.service. Sep 9 23:43:30.305560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:43:30.306026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:43:30.317788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:43:30.319211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:43:30.325669 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:43:30.328956 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:43:30.330163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:43:30.360682 augenrules[1945]: /sbin/augenrules: No change Sep 9 23:43:30.362920 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:43:30.364705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:43:30.378322 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:43:30.386054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:43:30.386316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:43:30.391970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 23:43:30.410083 augenrules[1973]: No rules Sep 9 23:43:30.415284 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:43:30.415806 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:43:30.534002 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:43:30.556732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:43:30.655147 systemd-networkd[1882]: lo: Link UP Sep 9 23:43:30.655173 systemd-networkd[1882]: lo: Gained carrier Sep 9 23:43:30.657938 systemd-networkd[1882]: Enumeration completed Sep 9 23:43:30.658728 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:43:30.663575 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:43:30.663631 systemd-networkd[1882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:43:30.664282 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:43:30.669776 systemd-resolved[1883]: Positive Trust Anchors: Sep 9 23:43:30.670258 systemd-resolved[1883]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:43:30.670328 systemd-resolved[1883]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:43:30.671834 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:43:30.677244 systemd-networkd[1882]: eth0: Link UP Sep 9 23:43:30.677529 systemd-networkd[1882]: eth0: Gained carrier Sep 9 23:43:30.677565 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:43:30.692729 systemd-networkd[1882]: eth0: DHCPv4 address 172.31.26.206/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 9 23:43:30.693848 systemd-resolved[1883]: Defaulting to hostname 'linux'. Sep 9 23:43:30.700860 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:43:30.704546 systemd[1]: Reached target network.target - Network. Sep 9 23:43:30.706656 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:43:30.709240 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:43:30.711703 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:43:30.714430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:43:30.717425 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:43:30.719964 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:43:30.722689 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
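The DHCPv4 lease logged above (172.31.26.206/20, gateway 172.31.16.1) is internally consistent, which can be checked with the standard-library ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.26.206/20")  # address as acquired from DHCP
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the gateway lies inside the acquired /20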
Sep 9 23:43:30.725404 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:43:30.725581 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:43:30.727737 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:43:30.732888 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:43:30.737859 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 23:43:30.744528 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:43:30.747851 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:43:30.750662 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:43:30.757187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:43:30.759992 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:43:30.763937 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:43:30.767980 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 23:43:30.771583 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:43:30.773941 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:43:30.776101 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:43:30.776295 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:43:30.778524 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:43:30.783804 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 23:43:30.789962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:43:30.795504 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:43:30.806884 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:43:30.815305 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:43:30.817835 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 23:43:30.823708 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:43:30.837043 systemd[1]: Started ntpd.service - Network Time Service. Sep 9 23:43:30.849036 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 23:43:30.857167 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 9 23:43:30.866946 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:43:30.879505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:43:30.896960 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 23:43:30.901376 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:43:30.903256 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 9 23:43:30.910687 jq[2001]: false Sep 9 23:43:30.912633 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:43:30.919762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 23:43:30.930697 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:43:30.934345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:43:30.937735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 23:43:30.938398 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 23:43:30.939658 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:43:31.007214 extend-filesystems[2002]: Found /dev/nvme0n1p6 Sep 9 23:43:31.042332 extend-filesystems[2002]: Found /dev/nvme0n1p9 Sep 9 23:43:31.052057 update_engine[2013]: I20250909 23:43:31.050261 2013 main.cc:92] Flatcar Update Engine starting Sep 9 23:43:31.058794 jq[2017]: true Sep 9 23:43:31.060538 ntpd[2004]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 21:32:21 UTC 2025 (1): Starting Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 21:32:21 UTC 2025 (1): Starting Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: ---------------------------------------------------- Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: ntp-4 is maintained by Network Time Foundation, Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: corporation. Support and training for ntp-4 are Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: available at https://www.nwtime.org/support Sep 9 23:43:31.064392 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: ---------------------------------------------------- Sep 9 23:43:31.075729 extend-filesystems[2002]: Checking size of /dev/nvme0n1p9 Sep 9 23:43:31.062654 ntpd[2004]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 9 23:43:31.080363 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: proto: precision = 0.096 usec (-23) Sep 9 23:43:31.080363 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: basedate set to 2025-08-28 Sep 9 23:43:31.080363 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: gps base set to 2025-08-31 (week 2382) Sep 9 23:43:31.062686 ntpd[2004]: ---------------------------------------------------- Sep 9 23:43:31.062707 ntpd[2004]: ntp-4 is maintained by Network Time Foundation, Sep 9 23:43:31.062724 ntpd[2004]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 9 23:43:31.062741 ntpd[2004]: corporation. 
Support and training for ntp-4 are Sep 9 23:43:31.062758 ntpd[2004]: available at https://www.nwtime.org/support Sep 9 23:43:31.062774 ntpd[2004]: ---------------------------------------------------- Sep 9 23:43:31.068314 ntpd[2004]: proto: precision = 0.096 usec (-23) Sep 9 23:43:31.072073 ntpd[2004]: basedate set to 2025-08-28 Sep 9 23:43:31.072108 ntpd[2004]: gps base set to 2025-08-31 (week 2382) Sep 9 23:43:31.081560 ntpd[2004]: Listen and drop on 0 v6wildcard [::]:123 Sep 9 23:43:31.094885 tar[2030]: linux-arm64/LICENSE Sep 9 23:43:31.094885 tar[2030]: linux-arm64/helm Sep 9 23:43:31.085154 (ntainerd)[2031]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listen and drop on 0 v6wildcard [::]:123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listen normally on 2 lo 127.0.0.1:123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listen normally on 3 eth0 172.31.26.206:123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listen normally on 4 lo [::1]:123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: bind(21) AF_INET6 fe80::4b8:57ff:fe29:df81%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: unable to create socket on eth0 (5) for fe80::4b8:57ff:fe29:df81%2#123 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: failed to init interface for address fe80::4b8:57ff:fe29:df81%2 Sep 9 23:43:31.099856 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: Listening on routing socket on fd #21 for interface updates Sep 9 23:43:31.083712 ntpd[2004]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 9 23:43:31.085245 ntpd[2004]: Listen normally on 2 lo 127.0.0.1:123 Sep 9 23:43:31.085313 ntpd[2004]: Listen normally on 3 eth0 172.31.26.206:123 Sep 9 23:43:31.085377 ntpd[2004]: Listen normally on 4 lo [::1]:123 Sep 9 23:43:31.085450 ntpd[2004]: bind(21) AF_INET6 fe80::4b8:57ff:fe29:df81%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 23:43:31.085486 ntpd[2004]: unable to create socket on eth0 (5) for fe80::4b8:57ff:fe29:df81%2#123 Sep 9 23:43:31.085512 ntpd[2004]: failed to init interface for address fe80::4b8:57ff:fe29:df81%2 Sep 9 23:43:31.085567 ntpd[2004]: Listening on routing socket on fd #21 for interface updates Sep 9 23:43:31.103234 dbus-daemon[1999]: [system] SELinux support is enabled Sep 9 23:43:31.114136 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:43:31.121198 dbus-daemon[1999]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1882 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 9 23:43:31.123311 systemd[1]: motdgen.service: Deactivated successfully. 
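ntpd's startup lines above include "basedate set to 2025-08-28" and "gps base set to 2025-08-31 (week 2382)"; the GPS week number is easy to verify against the GPS epoch of 1980-01-06:

    from datetime import date, timedelta

    gps_epoch = date(1980, 1, 6)               # start of GPS week 0
    print(gps_epoch + timedelta(weeks=2382))   # 2025-08-31, matching the log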
Sep 9 23:43:31.126439 ntpd[2004]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 23:43:31.126503 ntpd[2004]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 23:43:31.126670 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 23:43:31.126670 ntpd[2004]: 9 Sep 23:43:31 ntpd[2004]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 23:43:31.131949 update_engine[2013]: I20250909 23:43:31.129920 2013 update_check_scheduler.cc:74] Next update check in 11m1s Sep 9 23:43:31.144058 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:43:31.147266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 23:43:31.147315 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:43:31.151808 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 23:43:31.151851 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 23:43:31.160869 extend-filesystems[2002]: Resized partition /dev/nvme0n1p9 Sep 9 23:43:31.166656 extend-filesystems[2053]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 23:43:31.169562 systemd[1]: Started update-engine.service - Update Engine. Sep 9 23:43:31.176305 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 23:43:31.188330 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 9 23:43:31.193694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 23:43:31.206708 jq[2043]: true Sep 9 23:43:31.203106 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 9 23:43:31.213495 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 23:43:31.268899 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 9 23:43:31.338622 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 9 23:43:31.352901 extend-filesystems[2053]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 9 23:43:31.352901 extend-filesystems[2053]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 23:43:31.352901 extend-filesystems[2053]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Sep 9 23:43:31.365756 extend-filesystems[2002]: Resized filesystem in /dev/nvme0n1p9 Sep 9 23:43:31.383699 coreos-metadata[1998]: Sep 09 23:43:31.380 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 9 23:43:31.392733 coreos-metadata[1998]: Sep 09 23:43:31.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 9 23:43:31.400026 coreos-metadata[1998]: Sep 09 23:43:31.393 INFO Fetch successful Sep 9 23:43:31.400026 coreos-metadata[1998]: Sep 09 23:43:31.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 9 23:43:31.401471 bash[2082]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:43:31.404255 coreos-metadata[1998]: Sep 09 23:43:31.403 INFO Fetch successful Sep 9 23:43:31.404255 coreos-metadata[1998]: Sep 09 23:43:31.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 9 23:43:31.406978 coreos-metadata[1998]: Sep 09 23:43:31.406 INFO Fetch successful Sep 9 23:43:31.406978 coreos-metadata[1998]: Sep 09 23:43:31.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 9 23:43:31.409742 coreos-metadata[1998]: Sep 09 23:43:31.409 INFO Fetch successful Sep 9 23:43:31.409742 coreos-metadata[1998]: Sep 09 23:43:31.409 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 9 23:43:31.412500 coreos-metadata[1998]: Sep 09 23:43:31.412 INFO Fetch failed with 404: resource not found Sep 9 23:43:31.412500 coreos-metadata[1998]: Sep 09 23:43:31.412 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 9 23:43:31.416102 coreos-metadata[1998]: Sep 09 23:43:31.413 INFO Fetch successful Sep 9 23:43:31.418758 coreos-metadata[1998]: Sep 09 23:43:31.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 9 23:43:31.419674 coreos-metadata[1998]: Sep 09 23:43:31.419 INFO Fetch successful Sep 9 23:43:31.419962 coreos-metadata[1998]: Sep 09 23:43:31.419 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 9 23:43:31.425894 coreos-metadata[1998]: Sep 09 23:43:31.425 INFO Fetch successful Sep 9 23:43:31.425894 coreos-metadata[1998]: Sep 09 23:43:31.425 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 9 23:43:31.427636 coreos-metadata[1998]: Sep 09 23:43:31.427 INFO Fetch successful Sep 9 23:43:31.427636 coreos-metadata[1998]: Sep 09 23:43:31.427 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 9 23:43:31.431618 coreos-metadata[1998]: Sep 09 23:43:31.430 INFO Fetch successful Sep 9 23:43:31.478783 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 23:43:31.479213 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 23:43:31.483662 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 23:43:31.505004 systemd[1]: Starting sshkeys.service... Sep 9 23:43:31.632225 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 23:43:31.642452 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 23:43:31.724430 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 23:43:31.727528 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
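The online resize reported above grows the root filesystem from 553472 to 1489915 blocks of 4 KiB, i.e. from about 2.1 GiB to about 5.7 GiB:

    BLOCK = 4096  # 4 KiB blocks, as reported by resize2fs
    for blocks in (553472, 1489915):
        print(round(blocks * BLOCK / 2**30, 2), "GiB")  # 2.11 GiB -> 5.68 GiB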
Sep 9 23:43:31.734010 systemd-logind[2011]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 23:43:31.734052 systemd-logind[2011]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 9 23:43:31.735652 systemd-logind[2011]: New seat seat0. Sep 9 23:43:31.745031 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:43:31.897263 containerd[2031]: time="2025-09-09T23:43:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 23:43:31.901210 containerd[2031]: time="2025-09-09T23:43:31.901130532Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 23:43:31.931500 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 9 23:43:31.942251 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 9 23:43:31.955153 dbus-daemon[1999]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2058 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 9 23:43:31.980319 systemd[1]: Starting polkit.service - Authorization Manager... Sep 9 23:43:32.022761 coreos-metadata[2115]: Sep 09 23:43:32.022 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 9 23:43:32.035134 containerd[2031]: time="2025-09-09T23:43:32.030156357Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.016µs" Sep 9 23:43:32.035250 coreos-metadata[2115]: Sep 09 23:43:32.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 9 23:43:32.039214 coreos-metadata[2115]: Sep 09 23:43:32.038 INFO Fetch successful Sep 9 23:43:32.039214 coreos-metadata[2115]: Sep 09 23:43:32.038 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 9 23:43:32.041082 coreos-metadata[2115]: Sep 09 23:43:32.040 INFO Fetch successful Sep 9 23:43:32.042377 containerd[2031]: time="2025-09-09T23:43:32.042103665Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 23:43:32.042377 containerd[2031]: time="2025-09-09T23:43:32.042286461Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 23:43:32.043839 containerd[2031]: time="2025-09-09T23:43:32.042775401Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 23:43:32.043839 containerd[2031]: time="2025-09-09T23:43:32.042845085Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 23:43:32.043839 containerd[2031]: time="2025-09-09T23:43:32.042924345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:43:32.043839 containerd[2031]: time="2025-09-09T23:43:32.043070049Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:43:32.043839 containerd[2031]: time="2025-09-09T23:43:32.043141233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:43:32.045862 containerd[2031]: time="2025-09-09T23:43:32.044815053Z" 
level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:43:32.045862 containerd[2031]: time="2025-09-09T23:43:32.044912001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:43:32.045862 containerd[2031]: time="2025-09-09T23:43:32.044968737Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:43:32.045862 containerd[2031]: time="2025-09-09T23:43:32.044993673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 23:43:32.045363 unknown[2115]: wrote ssh authorized keys file for user: core Sep 9 23:43:32.050284 containerd[2031]: time="2025-09-09T23:43:32.047619861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 23:43:32.055982 containerd[2031]: time="2025-09-09T23:43:32.052837065Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:43:32.055982 containerd[2031]: time="2025-09-09T23:43:32.052935597Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:43:32.055982 containerd[2031]: time="2025-09-09T23:43:32.052963341Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 23:43:32.055982 containerd[2031]: time="2025-09-09T23:43:32.053049561Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 23:43:32.066747 ntpd[2004]: bind(24) AF_INET6 fe80::4b8:57ff:fe29:df81%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 23:43:32.067496 ntpd[2004]: 9 Sep 23:43:32 ntpd[2004]: bind(24) AF_INET6 fe80::4b8:57ff:fe29:df81%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 23:43:32.067496 ntpd[2004]: 9 Sep 23:43:32 ntpd[2004]: unable to create socket on eth0 (6) for fe80::4b8:57ff:fe29:df81%2#123 Sep 9 23:43:32.067496 ntpd[2004]: 9 Sep 23:43:32 ntpd[2004]: failed to init interface for address fe80::4b8:57ff:fe29:df81%2 Sep 9 23:43:32.066802 ntpd[2004]: unable to create socket on eth0 (6) for fe80::4b8:57ff:fe29:df81%2#123 Sep 9 23:43:32.066828 ntpd[2004]: failed to init interface for address fe80::4b8:57ff:fe29:df81%2 Sep 9 23:43:32.076612 containerd[2031]: time="2025-09-09T23:43:32.072930885Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 23:43:32.076612 containerd[2031]: time="2025-09-09T23:43:32.073131093Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083445153Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083561313Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083613081Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 
Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083649537Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083679057Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083706573Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083734209Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083762193Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083791821Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083817081Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083840793Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.083869929Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.084111285Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 23:43:32.084518 containerd[2031]: time="2025-09-09T23:43:32.084150381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084184449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084215013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084285873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084316521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084351381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084379665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084415185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084445341Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 23:43:32.085136 containerd[2031]: time="2025-09-09T23:43:32.084472701Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 
23:43:32.088019 containerd[2031]: time="2025-09-09T23:43:32.087964509Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 23:43:32.091616 containerd[2031]: time="2025-09-09T23:43:32.089642457Z" level=info msg="Start snapshots syncer" Sep 9 23:43:32.091616 containerd[2031]: time="2025-09-09T23:43:32.089739765Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 23:43:32.091616 containerd[2031]: time="2025-09-09T23:43:32.090146625Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090243357Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090386673Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090635781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090694041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090722145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090750345Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090779421Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090805941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090834813Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090895365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090937665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.090968805Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.091033473Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:43:32.091978 containerd[2031]: time="2025-09-09T23:43:32.091068213Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091090977Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091114857Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091134969Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091161081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091189413Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091374021Z" level=info msg="runtime interface created" Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091392345Z" level=info msg="created NRI interface" Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091429617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091460193Z" level=info msg="Connect containerd service" Sep 9 23:43:32.092643 containerd[2031]: time="2025-09-09T23:43:32.091518573Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:43:32.111609 containerd[2031]: time="2025-09-09T23:43:32.110235141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:43:32.143120 sshd_keygen[2037]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:43:32.157928 update-ssh-keys[2171]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:43:32.158886 systemd[1]: 
Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 9 23:43:32.173647 systemd[1]: Finished sshkeys.service. Sep 9 23:43:32.320419 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 23:43:32.344293 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:43:32.352989 systemd[1]: Started sshd@0-172.31.26.206:22-139.178.89.65:56370.service - OpenSSH per-connection server daemon (139.178.89.65:56370). Sep 9 23:43:32.452005 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:43:32.455221 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:43:32.462288 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:43:32.563727 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:43:32.572156 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 23:43:32.578237 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 23:43:32.581441 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.605763695Z" level=info msg="Start subscribing containerd event" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.605865371Z" level=info msg="Start recovering state" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606014411Z" level=info msg="Start event monitor" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606038663Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606057911Z" level=info msg="Start streaming server" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606081971Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606099239Z" level=info msg="runtime interface starting up..." Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606114683Z" level=info msg="starting plugins..." Sep 9 23:43:32.607714 containerd[2031]: time="2025-09-09T23:43:32.606143856Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 23:43:32.609820 containerd[2031]: time="2025-09-09T23:43:32.609757668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:43:32.609935 containerd[2031]: time="2025-09-09T23:43:32.609884520Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 23:43:32.612484 containerd[2031]: time="2025-09-09T23:43:32.611792904Z" level=info msg="containerd successfully booted in 0.715231s" Sep 9 23:43:32.611930 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 9 23:43:32.659953 polkitd[2162]: Started polkitd version 126 Sep 9 23:43:32.674049 locksmithd[2059]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:43:32.678959 polkitd[2162]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 23:43:32.679785 polkitd[2162]: Loading rules from directory /run/polkit-1/rules.d Sep 9 23:43:32.679956 polkitd[2162]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 23:43:32.680726 polkitd[2162]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 23:43:32.680777 polkitd[2162]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 23:43:32.680855 polkitd[2162]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 23:43:32.682965 polkitd[2162]: Finished loading, compiling and executing 2 rules Sep 9 23:43:32.684368 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 23:43:32.690695 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 23:43:32.691273 polkitd[2162]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 23:43:32.711232 systemd-hostnamed[2058]: Hostname set to <ip-172-31-26-206> (transient) Sep 9 23:43:32.711264 systemd-resolved[1883]: System hostname changed to 'ip-172-31-26-206'. Sep 9 23:43:32.724778 systemd-networkd[1882]: eth0: Gained IPv6LL Sep 9 23:43:32.731661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:43:32.735552 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:43:32.741272 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 9 23:43:32.751538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:32.760413 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 23:43:32.835218 sshd[2212]: Accepted publickey for core from 139.178.89.65 port 56370 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:32.842663 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:32.868071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:43:32.873977 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:43:32.894571 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:43:32.904761 tar[2030]: linux-arm64/README.md Sep 9 23:43:32.926787 systemd-logind[2011]: New session 1 of user core. Sep 9 23:43:32.945808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 23:43:32.956630 amazon-ssm-agent[2242]: Initializing new seelog logger Sep 9 23:43:32.956630 amazon-ssm-agent[2242]: New Seelog Logger Creation Complete Sep 9 23:43:32.956630 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.956630 amazon-ssm-agent[2242]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.958100 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 processing appconfig overrides Sep 9 23:43:32.958731 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.958948 amazon-ssm-agent[2242]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 9 23:43:32.958948 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9585 INFO Proxy environment variables: Sep 9 23:43:32.959188 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 processing appconfig overrides Sep 9 23:43:32.962003 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:43:32.965462 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.965547 amazon-ssm-agent[2242]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.965806 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 processing appconfig overrides Sep 9 23:43:32.971534 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.971731 amazon-ssm-agent[2242]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:32.971995 amazon-ssm-agent[2242]: 2025/09/09 23:43:32 processing appconfig overrides Sep 9 23:43:32.973225 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:43:33.001628 (systemd)[2262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:43:33.010853 systemd-logind[2011]: New session c1 of user core. Sep 9 23:43:33.059297 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9586 INFO https_proxy: Sep 9 23:43:33.160065 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9586 INFO http_proxy: Sep 9 23:43:33.258361 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9586 INFO no_proxy: Sep 9 23:43:33.332182 systemd[2262]: Queued start job for default target default.target. Sep 9 23:43:33.339398 systemd[2262]: Created slice app.slice - User Application Slice. Sep 9 23:43:33.340712 systemd[2262]: Reached target paths.target - Paths. Sep 9 23:43:33.340830 systemd[2262]: Reached target timers.target - Timers. Sep 9 23:43:33.345381 systemd[2262]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:43:33.358425 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9592 INFO Checking if agent identity type OnPrem can be assumed Sep 9 23:43:33.389130 systemd[2262]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:43:33.389358 systemd[2262]: Reached target sockets.target - Sockets. Sep 9 23:43:33.389438 systemd[2262]: Reached target basic.target - Basic System. Sep 9 23:43:33.389517 systemd[2262]: Reached target default.target - Main User Target. Sep 9 23:43:33.389575 systemd[2262]: Startup finished in 362ms. Sep 9 23:43:33.390376 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:43:33.399897 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:43:33.457410 amazon-ssm-agent[2242]: 2025-09-09 23:43:32.9593 INFO Checking if agent identity type EC2 can be assumed Sep 9 23:43:33.556639 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0797 INFO Agent will take identity from EC2 Sep 9 23:43:33.573458 systemd[1]: Started sshd@1-172.31.26.206:22-139.178.89.65:56376.service - OpenSSH per-connection server daemon (139.178.89.65:56376). Sep 9 23:43:33.656136 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0877 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 9 23:43:33.669875 amazon-ssm-agent[2242]: 2025/09/09 23:43:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 23:43:33.670273 amazon-ssm-agent[2242]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 9 23:43:33.670516 amazon-ssm-agent[2242]: 2025/09/09 23:43:33 processing appconfig overrides Sep 9 23:43:33.709434 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0877 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0878 INFO [amazon-ssm-agent] Starting Core Agent Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0878 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0878 INFO [Registrar] Starting registrar module Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0924 INFO [EC2Identity] Checking disk for registration info Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0925 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.0925 INFO [EC2Identity] Generating registration keypair Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6136 INFO [EC2Identity] Checking write access before registering Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6143 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6694 INFO [EC2Identity] EC2 registration was successful. Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6695 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6696 INFO [CredentialRefresher] credentialRefresher has started Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.6697 INFO [CredentialRefresher] Starting credentials refresher loop Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.7087 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 9 23:43:33.710126 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.7091 INFO [CredentialRefresher] Credentials ready Sep 9 23:43:33.755204 amazon-ssm-agent[2242]: 2025-09-09 23:43:33.7099 INFO [CredentialRefresher] Next credential rotation will be in 29.9999809659 minutes Sep 9 23:43:33.802191 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 56376 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:33.804739 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:33.814729 systemd-logind[2011]: New session 2 of user core. Sep 9 23:43:33.820929 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 23:43:33.954658 sshd[2281]: Connection closed by 139.178.89.65 port 56376 Sep 9 23:43:33.955709 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:33.964223 systemd[1]: sshd@1-172.31.26.206:22-139.178.89.65:56376.service: Deactivated successfully. Sep 9 23:43:33.967559 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 23:43:33.970901 systemd-logind[2011]: Session 2 logged out. Waiting for processes to exit. Sep 9 23:43:33.976298 systemd-logind[2011]: Removed session 2. Sep 9 23:43:33.991563 systemd[1]: Started sshd@2-172.31.26.206:22-139.178.89.65:56380.service - OpenSSH per-connection server daemon (139.178.89.65:56380). 
Sep 9 23:43:34.185926 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 56380 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:34.188372 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:34.198707 systemd-logind[2011]: New session 3 of user core. Sep 9 23:43:34.206947 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:43:34.337879 sshd[2290]: Connection closed by 139.178.89.65 port 56380 Sep 9 23:43:34.339268 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:34.347789 systemd[1]: sshd@2-172.31.26.206:22-139.178.89.65:56380.service: Deactivated successfully. Sep 9 23:43:34.351202 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:43:34.356860 systemd-logind[2011]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:43:34.359359 systemd-logind[2011]: Removed session 3. Sep 9 23:43:34.580932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:34.589644 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:43:34.592901 systemd[1]: Startup finished in 3.702s (kernel) + 8.909s (initrd) + 9.064s (userspace) = 21.677s. Sep 9 23:43:34.595445 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:34.737096 amazon-ssm-agent[2242]: 2025-09-09 23:43:34.7369 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 9 23:43:34.838231 amazon-ssm-agent[2242]: 2025-09-09 23:43:34.7418 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2307) started Sep 9 23:43:34.938057 amazon-ssm-agent[2242]: 2025-09-09 23:43:34.7418 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 9 23:43:35.063690 ntpd[2004]: Listen normally on 7 eth0 [fe80::4b8:57ff:fe29:df81%2]:123 Sep 9 23:43:35.064143 ntpd[2004]: 9 Sep 23:43:35 ntpd[2004]: Listen normally on 7 eth0 [fe80::4b8:57ff:fe29:df81%2]:123 Sep 9 23:43:35.604712 kubelet[2300]: E0909 23:43:35.604570 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:35.609188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:35.609888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:35.611773 systemd[1]: kubelet.service: Consumed 1.451s CPU time, 259.5M memory peak. Sep 9 23:43:38.540451 systemd-resolved[1883]: Clock change detected. Flushing caches. Sep 9 23:43:44.856945 systemd[1]: Started sshd@3-172.31.26.206:22-139.178.89.65:37880.service - OpenSSH per-connection server daemon (139.178.89.65:37880). Sep 9 23:43:45.051448 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 37880 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:45.053769 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:45.062202 systemd-logind[2011]: New session 4 of user core. Sep 9 23:43:45.071429 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 9 23:43:45.198032 sshd[2328]: Connection closed by 139.178.89.65 port 37880 Sep 9 23:43:45.199357 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:45.205157 systemd-logind[2011]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:43:45.206301 systemd[1]: sshd@3-172.31.26.206:22-139.178.89.65:37880.service: Deactivated successfully. Sep 9 23:43:45.209399 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:43:45.213733 systemd-logind[2011]: Removed session 4. Sep 9 23:43:45.236294 systemd[1]: Started sshd@4-172.31.26.206:22-139.178.89.65:37890.service - OpenSSH per-connection server daemon (139.178.89.65:37890). Sep 9 23:43:45.428213 sshd[2334]: Accepted publickey for core from 139.178.89.65 port 37890 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:45.430610 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:45.440212 systemd-logind[2011]: New session 5 of user core. Sep 9 23:43:45.446416 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 23:43:45.565986 sshd[2337]: Connection closed by 139.178.89.65 port 37890 Sep 9 23:43:45.565786 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:45.572460 systemd-logind[2011]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:43:45.573054 systemd[1]: sshd@4-172.31.26.206:22-139.178.89.65:37890.service: Deactivated successfully. Sep 9 23:43:45.576202 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:43:45.581551 systemd-logind[2011]: Removed session 5. Sep 9 23:43:45.602324 systemd[1]: Started sshd@5-172.31.26.206:22-139.178.89.65:37902.service - OpenSSH per-connection server daemon (139.178.89.65:37902). Sep 9 23:43:45.789299 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 37902 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:45.791655 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:45.801187 systemd-logind[2011]: New session 6 of user core. Sep 9 23:43:45.807406 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:43:45.933249 sshd[2346]: Connection closed by 139.178.89.65 port 37902 Sep 9 23:43:45.933251 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:45.939381 systemd[1]: sshd@5-172.31.26.206:22-139.178.89.65:37902.service: Deactivated successfully. Sep 9 23:43:45.942327 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:43:45.946411 systemd-logind[2011]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:43:45.949791 systemd-logind[2011]: Removed session 6. Sep 9 23:43:45.974449 systemd[1]: Started sshd@6-172.31.26.206:22-139.178.89.65:37908.service - OpenSSH per-connection server daemon (139.178.89.65:37908). Sep 9 23:43:46.133959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:43:46.136787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:46.164920 sshd[2352]: Accepted publickey for core from 139.178.89.65 port 37908 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:46.167585 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:46.176726 systemd-logind[2011]: New session 7 of user core. Sep 9 23:43:46.187421 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 23:43:46.311953 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:43:46.313231 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:46.331717 sudo[2359]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:46.356317 sshd[2358]: Connection closed by 139.178.89.65 port 37908 Sep 9 23:43:46.357752 sshd-session[2352]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:46.365802 systemd[1]: sshd@6-172.31.26.206:22-139.178.89.65:37908.service: Deactivated successfully. Sep 9 23:43:46.372031 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:43:46.376510 systemd-logind[2011]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:43:46.399655 systemd[1]: Started sshd@7-172.31.26.206:22-139.178.89.65:37922.service - OpenSSH per-connection server daemon (139.178.89.65:37922). Sep 9 23:43:46.402770 systemd-logind[2011]: Removed session 7. Sep 9 23:43:46.537521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:46.548613 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:46.605774 sshd[2365]: Accepted publickey for core from 139.178.89.65 port 37922 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:46.608421 sshd-session[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:46.620582 systemd-logind[2011]: New session 8 of user core. Sep 9 23:43:46.625534 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:43:46.638089 kubelet[2373]: E0909 23:43:46.637997 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:46.645792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:46.646282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:46.648275 systemd[1]: kubelet.service: Consumed 321ms CPU time, 107.7M memory peak. Sep 9 23:43:46.733941 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:43:46.734884 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:46.744313 sudo[2382]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:46.753945 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:43:46.754829 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:46.773966 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:43:46.836264 augenrules[2404]: No rules Sep 9 23:43:46.838692 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:43:46.840279 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 9 23:43:46.842763 sudo[2381]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:46.866002 sshd[2379]: Connection closed by 139.178.89.65 port 37922 Sep 9 23:43:46.866792 sshd-session[2365]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:46.873717 systemd[1]: sshd@7-172.31.26.206:22-139.178.89.65:37922.service: Deactivated successfully. Sep 9 23:43:46.876619 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:43:46.878581 systemd-logind[2011]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:43:46.881315 systemd-logind[2011]: Removed session 8. Sep 9 23:43:46.901847 systemd[1]: Started sshd@8-172.31.26.206:22-139.178.89.65:37932.service - OpenSSH per-connection server daemon (139.178.89.65:37932). Sep 9 23:43:47.096363 sshd[2413]: Accepted publickey for core from 139.178.89.65 port 37932 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:43:47.098788 sshd-session[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:47.106548 systemd-logind[2011]: New session 9 of user core. Sep 9 23:43:47.111340 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:43:47.213683 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:43:47.214343 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:47.733751 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:43:47.749663 (dockerd)[2434]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:43:48.134254 dockerd[2434]: time="2025-09-09T23:43:48.133307563Z" level=info msg="Starting up" Sep 9 23:43:48.136453 dockerd[2434]: time="2025-09-09T23:43:48.136384519Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:43:48.162224 dockerd[2434]: time="2025-09-09T23:43:48.161390743Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:43:48.240263 dockerd[2434]: time="2025-09-09T23:43:48.240204200Z" level=info msg="Loading containers: start." Sep 9 23:43:48.259159 kernel: Initializing XFRM netlink socket Sep 9 23:43:48.585338 (udev-worker)[2455]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:43:48.661936 systemd-networkd[1882]: docker0: Link UP Sep 9 23:43:48.672398 dockerd[2434]: time="2025-09-09T23:43:48.672232774Z" level=info msg="Loading containers: done." 
Sep 9 23:43:48.703547 dockerd[2434]: time="2025-09-09T23:43:48.703459774Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:43:48.703751 dockerd[2434]: time="2025-09-09T23:43:48.703595062Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 23:43:48.703817 dockerd[2434]: time="2025-09-09T23:43:48.703741486Z" level=info msg="Initializing buildkit" Sep 9 23:43:48.754766 dockerd[2434]: time="2025-09-09T23:43:48.754697266Z" level=info msg="Completed buildkit initialization" Sep 9 23:43:48.772960 dockerd[2434]: time="2025-09-09T23:43:48.772801582Z" level=info msg="Daemon has completed initialization" Sep 9 23:43:48.773373 dockerd[2434]: time="2025-09-09T23:43:48.773183110Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:43:48.773616 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:43:49.902945 containerd[2031]: time="2025-09-09T23:43:49.902891640Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 23:43:50.584300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625755417.mount: Deactivated successfully. Sep 9 23:43:51.943654 containerd[2031]: time="2025-09-09T23:43:51.943569410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:51.946572 containerd[2031]: time="2025-09-09T23:43:51.946508378Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 9 23:43:51.947646 containerd[2031]: time="2025-09-09T23:43:51.947585054Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:51.957782 containerd[2031]: time="2025-09-09T23:43:51.956770346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:51.961287 containerd[2031]: time="2025-09-09T23:43:51.961218134Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.058266194s" Sep 9 23:43:51.961440 containerd[2031]: time="2025-09-09T23:43:51.961295966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 23:43:51.964291 containerd[2031]: time="2025-09-09T23:43:51.964202066Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 23:43:53.512024 containerd[2031]: time="2025-09-09T23:43:53.511957250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:53.513741 containerd[2031]: time="2025-09-09T23:43:53.513684398Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active 
requests=0, bytes read=23536977" Sep 9 23:43:53.515164 containerd[2031]: time="2025-09-09T23:43:53.514529642Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:53.519150 containerd[2031]: time="2025-09-09T23:43:53.518894342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:53.521099 containerd[2031]: time="2025-09-09T23:43:53.521052686Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.55668898s" Sep 9 23:43:53.521271 containerd[2031]: time="2025-09-09T23:43:53.521242574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 23:43:53.522395 containerd[2031]: time="2025-09-09T23:43:53.522341702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 23:43:54.828866 containerd[2031]: time="2025-09-09T23:43:54.828462676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:54.830818 containerd[2031]: time="2025-09-09T23:43:54.830773384Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 9 23:43:54.831147 containerd[2031]: time="2025-09-09T23:43:54.831096724Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:54.837219 containerd[2031]: time="2025-09-09T23:43:54.837148636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:54.840814 containerd[2031]: time="2025-09-09T23:43:54.840747328Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.318345326s" Sep 9 23:43:54.840814 containerd[2031]: time="2025-09-09T23:43:54.840805672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 23:43:54.841962 containerd[2031]: time="2025-09-09T23:43:54.841436164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 23:43:56.132442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640804508.mount: Deactivated successfully. 
Sep 9 23:43:56.711155 containerd[2031]: time="2025-09-09T23:43:56.710589870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:56.712508 containerd[2031]: time="2025-09-09T23:43:56.712432266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 9 23:43:56.714054 containerd[2031]: time="2025-09-09T23:43:56.713990190Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:56.718555 containerd[2031]: time="2025-09-09T23:43:56.718486518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:56.720881 containerd[2031]: time="2025-09-09T23:43:56.720808890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.879325062s" Sep 9 23:43:56.720881 containerd[2031]: time="2025-09-09T23:43:56.720873222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 23:43:56.722089 containerd[2031]: time="2025-09-09T23:43:56.722030850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 23:43:56.896665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:43:56.899116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:57.258017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:57.273165 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:57.338391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597434293.mount: Deactivated successfully. Sep 9 23:43:57.362704 kubelet[2723]: E0909 23:43:57.362646 2723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:57.368638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:57.368940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:57.369708 systemd[1]: kubelet.service: Consumed 299ms CPU time, 105.4M memory peak. 
Sep 9 23:43:58.614109 containerd[2031]: time="2025-09-09T23:43:58.614027467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:58.616363 containerd[2031]: time="2025-09-09T23:43:58.616304119Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 9 23:43:58.619625 containerd[2031]: time="2025-09-09T23:43:58.619566643Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:58.625948 containerd[2031]: time="2025-09-09T23:43:58.625863607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:58.628183 containerd[2031]: time="2025-09-09T23:43:58.627876895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.905783609s" Sep 9 23:43:58.628183 containerd[2031]: time="2025-09-09T23:43:58.627937111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 23:43:58.630096 containerd[2031]: time="2025-09-09T23:43:58.629115727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:43:59.120063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064406527.mount: Deactivated successfully. 
Sep 9 23:43:59.130804 containerd[2031]: time="2025-09-09T23:43:59.130729650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:59.133310 containerd[2031]: time="2025-09-09T23:43:59.133243878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 9 23:43:59.135757 containerd[2031]: time="2025-09-09T23:43:59.135678438Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:59.141189 containerd[2031]: time="2025-09-09T23:43:59.140425218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:59.142485 containerd[2031]: time="2025-09-09T23:43:59.141773766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 511.906059ms" Sep 9 23:43:59.142485 containerd[2031]: time="2025-09-09T23:43:59.141829146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:43:59.142854 containerd[2031]: time="2025-09-09T23:43:59.142676022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 23:43:59.658875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064437605.mount: Deactivated successfully. 
Sep 9 23:44:01.854620 containerd[2031]: time="2025-09-09T23:44:01.854561663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:01.856513 containerd[2031]: time="2025-09-09T23:44:01.856453367Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 9 23:44:01.857192 containerd[2031]: time="2025-09-09T23:44:01.857155499Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:01.862373 containerd[2031]: time="2025-09-09T23:44:01.862311539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:01.864724 containerd[2031]: time="2025-09-09T23:44:01.864678383Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.721921493s" Sep 9 23:44:01.864901 containerd[2031]: time="2025-09-09T23:44:01.864869927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 23:44:03.222875 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 9 23:44:07.621355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 23:44:07.625437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:08.049447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:08.061583 (kubelet)[2873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:44:08.135654 kubelet[2873]: E0909 23:44:08.135592 2873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:44:08.140966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:44:08.141514 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:44:08.142483 systemd[1]: kubelet.service: Consumed 282ms CPU time, 105M memory peak. Sep 9 23:44:11.570730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:11.571094 systemd[1]: kubelet.service: Consumed 282ms CPU time, 105M memory peak. Sep 9 23:44:11.574767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:11.630299 systemd[1]: Reload requested from client PID 2886 ('systemctl') (unit session-9.scope)... Sep 9 23:44:11.630328 systemd[1]: Reloading... Sep 9 23:44:11.869394 zram_generator::config[2933]: No configuration found. Sep 9 23:44:12.337032 systemd[1]: Reloading finished in 706 ms. 
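The restart loop above is the expected first-boot sequence: the kubelet unit comes up before kubeadm has written /var/lib/kubelet/config.yaml, exits with the "no such file or directory" error, and systemd keeps rescheduling it (restart counter 3 above) until the file exists. A minimal sketch of what that file contains once generated (field names are from the real kubelet.config.k8s.io/v1beta1 API; the concrete values are illustrative assumptions, not read from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # must match the CRI runtime's driver; this log later reports cgroupDriver="systemd"
    cgroupDriver: systemd
    # directory the kubelet watches for the control-plane static pod manifests seen below
    staticPodPath: /etc/kubernetes/manifests
    # client cert rotation, matching "Client rotation is on, will bootstrap in background" below
    rotateCertificates: true
    authentication:
      x509:
        # CA bundle the dynamic_cafile_content controller below starts watching
        clientCAFile: /etc/kubernetes/pki/ca.crt

With the file in place, the next start (PID 2993 below) gets past config loading and begins bootstrapping against the API server.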
Sep 9 23:44:12.432508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:44:12.432895 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:44:12.433690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:12.433880 systemd[1]: kubelet.service: Consumed 219ms CPU time, 95M memory peak. Sep 9 23:44:12.437658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:13.226523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:13.240692 (kubelet)[2993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:44:13.316843 kubelet[2993]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:44:13.316843 kubelet[2993]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:44:13.316843 kubelet[2993]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:44:13.317386 kubelet[2993]: I0909 23:44:13.316904 2993 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:44:14.774152 kubelet[2993]: I0909 23:44:14.774044 2993 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:44:14.777284 kubelet[2993]: I0909 23:44:14.777231 2993 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:44:14.778241 kubelet[2993]: I0909 23:44:14.778212 2993 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:44:14.825227 kubelet[2993]: E0909 23:44:14.825173 2993 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 23:44:14.828448 kubelet[2993]: I0909 23:44:14.828396 2993 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:44:14.844006 kubelet[2993]: I0909 23:44:14.843964 2993 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:44:14.850840 kubelet[2993]: I0909 23:44:14.850787 2993 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:44:14.853347 kubelet[2993]: I0909 23:44:14.853274 2993 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:44:14.853624 kubelet[2993]: I0909 23:44:14.853335 2993 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:44:14.853795 kubelet[2993]: I0909 23:44:14.853750 2993 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:44:14.853795 kubelet[2993]: I0909 23:44:14.853772 2993 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:44:14.855542 kubelet[2993]: I0909 23:44:14.855494 2993 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:14.861717 kubelet[2993]: I0909 23:44:14.861518 2993 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:44:14.861717 kubelet[2993]: I0909 23:44:14.861567 2993 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:44:14.861717 kubelet[2993]: I0909 23:44:14.861610 2993 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:44:14.865138 kubelet[2993]: I0909 23:44:14.864730 2993 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:44:14.870228 kubelet[2993]: E0909 23:44:14.870186 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-206&limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 23:44:14.870523 kubelet[2993]: I0909 23:44:14.870498 2993 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:44:14.871832 kubelet[2993]: I0909 23:44:14.871797 2993 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 
23:44:14.872211 kubelet[2993]: W0909 23:44:14.872190 2993 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 23:44:14.881158 kubelet[2993]: I0909 23:44:14.879852 2993 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:44:14.881158 kubelet[2993]: I0909 23:44:14.879936 2993 server.go:1289] "Started kubelet" Sep 9 23:44:14.881594 kubelet[2993]: E0909 23:44:14.881529 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 23:44:14.890160 kubelet[2993]: I0909 23:44:14.890100 2993 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:44:14.891024 kubelet[2993]: I0909 23:44:14.890975 2993 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:44:14.893986 kubelet[2993]: I0909 23:44:14.893911 2993 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:44:14.894575 kubelet[2993]: I0909 23:44:14.894518 2993 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:44:14.896325 kubelet[2993]: E0909 23:44:14.893693 2993 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-206.1863c1e1bc8e7dec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-206,UID:ip-172-31-26-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-206,},FirstTimestamp:2025-09-09 23:44:14.879882732 +0000 UTC m=+1.632526041,LastTimestamp:2025-09-09 23:44:14.879882732 +0000 UTC m=+1.632526041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-206,}" Sep 9 23:44:14.897179 kubelet[2993]: I0909 23:44:14.897098 2993 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:44:14.897503 kubelet[2993]: I0909 23:44:14.897470 2993 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:44:14.901490 kubelet[2993]: I0909 23:44:14.901435 2993 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:44:14.904575 kubelet[2993]: I0909 23:44:14.904510 2993 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:44:14.904746 kubelet[2993]: E0909 23:44:14.904708 2993 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-206\" not found" Sep 9 23:44:14.906744 kubelet[2993]: I0909 23:44:14.906687 2993 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:44:14.908847 kubelet[2993]: I0909 23:44:14.908793 2993 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:44:14.908995 kubelet[2993]: I0909 
23:44:14.908898 2993 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:44:14.909898 kubelet[2993]: E0909 23:44:14.909831 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:44:14.910031 kubelet[2993]: E0909 23:44:14.909982 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-206?timeout=10s\": dial tcp 172.31.26.206:6443: connect: connection refused" interval="200ms" Sep 9 23:44:14.913173 kubelet[2993]: I0909 23:44:14.913014 2993 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:44:14.914340 kubelet[2993]: E0909 23:44:14.914285 2993 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:44:14.934770 kubelet[2993]: I0909 23:44:14.934687 2993 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:44:14.934770 kubelet[2993]: I0909 23:44:14.934763 2993 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:44:14.935029 kubelet[2993]: I0909 23:44:14.934796 2993 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:14.938189 kubelet[2993]: I0909 23:44:14.938110 2993 policy_none.go:49] "None policy: Start" Sep 9 23:44:14.938189 kubelet[2993]: I0909 23:44:14.938174 2993 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:44:14.938189 kubelet[2993]: I0909 23:44:14.938199 2993 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:44:14.951402 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:44:14.954278 kubelet[2993]: I0909 23:44:14.954209 2993 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:44:14.958515 kubelet[2993]: I0909 23:44:14.958457 2993 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:44:14.958515 kubelet[2993]: I0909 23:44:14.958502 2993 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:44:14.958703 kubelet[2993]: I0909 23:44:14.958534 2993 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:44:14.958703 kubelet[2993]: I0909 23:44:14.958547 2993 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:44:14.958703 kubelet[2993]: E0909 23:44:14.958609 2993 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:44:14.961667 kubelet[2993]: E0909 23:44:14.961557 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 23:44:14.972648 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
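The HardEvictionThresholds embedded in the nodeConfig dump above are the kubelet's default hard-eviction settings. Rendered as the equivalent evictionHard map in KubeletConfiguration form (values transcribed from the dump; a sketch of the config-file spelling, not a file shown in this log):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"

The percentages correspond to the 0.1/0.05/0.15/0.05 fractions in the dump, and the memory threshold to its 100Mi quantity.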
Sep 9 23:44:14.980416 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:44:14.993004 kubelet[2993]: E0909 23:44:14.992965 2993 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:44:14.993733 kubelet[2993]: I0909 23:44:14.993611 2993 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:44:14.994909 kubelet[2993]: I0909 23:44:14.993884 2993 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:44:14.995327 kubelet[2993]: I0909 23:44:14.995304 2993 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:44:14.998108 kubelet[2993]: E0909 23:44:14.998074 2993 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:44:14.998551 kubelet[2993]: E0909 23:44:14.998413 2993 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-206\" not found" Sep 9 23:44:15.082767 systemd[1]: Created slice kubepods-burstable-pode2f27cc36ac09d750d45c6a57ddf2ebc.slice - libcontainer container kubepods-burstable-pode2f27cc36ac09d750d45c6a57ddf2ebc.slice. Sep 9 23:44:15.099948 kubelet[2993]: I0909 23:44:15.099839 2993 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-206" Sep 9 23:44:15.102166 kubelet[2993]: E0909 23:44:15.101494 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:15.102727 kubelet[2993]: E0909 23:44:15.102689 2993 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.206:6443/api/v1/nodes\": dial tcp 172.31.26.206:6443: connect: connection refused" node="ip-172-31-26-206" Sep 9 23:44:15.108484 systemd[1]: Created slice kubepods-burstable-pod5083d6503fe015ce1d2cc0578fc3c47b.slice - libcontainer container kubepods-burstable-pod5083d6503fe015ce1d2cc0578fc3c47b.slice. 
Sep 9 23:44:15.109781 kubelet[2993]: I0909 23:44:15.109696 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:15.110023 kubelet[2993]: I0909 23:44:15.109940 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:15.110154 kubelet[2993]: I0909 23:44:15.110099 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c65232c32b14988bf9aebbfffafd85b8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-206\" (UID: \"c65232c32b14988bf9aebbfffafd85b8\") " pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:15.110285 kubelet[2993]: I0909 23:44:15.110262 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:15.110425 kubelet[2993]: I0909 23:44:15.110401 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:15.111326 kubelet[2993]: I0909 23:44:15.111207 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:15.111326 kubelet[2993]: I0909 23:44:15.111283 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:15.112042 kubelet[2993]: I0909 23:44:15.111906 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-ca-certs\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:15.112042 kubelet[2993]: I0909 23:44:15.112001 2993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:15.113148 kubelet[2993]: E0909 23:44:15.112360 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-206?timeout=10s\": dial tcp 172.31.26.206:6443: connect: connection refused" interval="400ms" Sep 9 23:44:15.113984 kubelet[2993]: E0909 23:44:15.113934 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:15.121342 systemd[1]: Created slice kubepods-burstable-podc65232c32b14988bf9aebbfffafd85b8.slice - libcontainer container kubepods-burstable-podc65232c32b14988bf9aebbfffafd85b8.slice. Sep 9 23:44:15.125027 kubelet[2993]: E0909 23:44:15.124711 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:15.305677 kubelet[2993]: I0909 23:44:15.305619 2993 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-206" Sep 9 23:44:15.306332 kubelet[2993]: E0909 23:44:15.306283 2993 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.206:6443/api/v1/nodes\": dial tcp 172.31.26.206:6443: connect: connection refused" node="ip-172-31-26-206" Sep 9 23:44:15.361723 kubelet[2993]: E0909 23:44:15.361452 2993 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-206.1863c1e1bc8e7dec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-206,UID:ip-172-31-26-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-206,},FirstTimestamp:2025-09-09 23:44:14.879882732 +0000 UTC m=+1.632526041,LastTimestamp:2025-09-09 23:44:14.879882732 +0000 UTC m=+1.632526041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-206,}" Sep 9 23:44:15.404508 containerd[2031]: time="2025-09-09T23:44:15.404429459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-206,Uid:e2f27cc36ac09d750d45c6a57ddf2ebc,Namespace:kube-system,Attempt:0,}" Sep 9 23:44:15.415225 containerd[2031]: time="2025-09-09T23:44:15.415155035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-206,Uid:5083d6503fe015ce1d2cc0578fc3c47b,Namespace:kube-system,Attempt:0,}" Sep 9 23:44:15.427305 containerd[2031]: time="2025-09-09T23:44:15.427243583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-206,Uid:c65232c32b14988bf9aebbfffafd85b8,Namespace:kube-system,Attempt:0,}" Sep 9 23:44:15.472861 containerd[2031]: time="2025-09-09T23:44:15.472765487Z" level=info msg="connecting to shim d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7" 
address="unix:///run/containerd/s/679a3b43fdad5acf356ec5845b40feb5731913c2174369991d07fcaf80524a70" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:44:15.473946 containerd[2031]: time="2025-09-09T23:44:15.473763659Z" level=info msg="connecting to shim 8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976" address="unix:///run/containerd/s/999e16206879968bff1b44f5de1b581a40dc058e2672477452108e720e6fb276" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:44:15.513321 kubelet[2993]: E0909 23:44:15.513245 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-206?timeout=10s\": dial tcp 172.31.26.206:6443: connect: connection refused" interval="800ms" Sep 9 23:44:15.526857 containerd[2031]: time="2025-09-09T23:44:15.526561775Z" level=info msg="connecting to shim 6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb" address="unix:///run/containerd/s/622738bb0cbeb08e7fdc14b66fd2258ff7de73984b711a778ac47479606b846c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:44:15.561671 systemd[1]: Started cri-containerd-d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7.scope - libcontainer container d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7. Sep 9 23:44:15.583761 systemd[1]: Started cri-containerd-8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976.scope - libcontainer container 8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976. Sep 9 23:44:15.611663 systemd[1]: Started cri-containerd-6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb.scope - libcontainer container 6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb. Sep 9 23:44:15.711954 kubelet[2993]: I0909 23:44:15.711545 2993 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-206" Sep 9 23:44:15.712105 kubelet[2993]: E0909 23:44:15.712030 2993 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.206:6443/api/v1/nodes\": dial tcp 172.31.26.206:6443: connect: connection refused" node="ip-172-31-26-206" Sep 9 23:44:15.739340 containerd[2031]: time="2025-09-09T23:44:15.739265400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-206,Uid:e2f27cc36ac09d750d45c6a57ddf2ebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7\"" Sep 9 23:44:15.754713 containerd[2031]: time="2025-09-09T23:44:15.754656480Z" level=info msg="CreateContainer within sandbox \"d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:44:15.756440 containerd[2031]: time="2025-09-09T23:44:15.756313212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-206,Uid:5083d6503fe015ce1d2cc0578fc3c47b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976\"" Sep 9 23:44:15.764621 containerd[2031]: time="2025-09-09T23:44:15.764484336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-206,Uid:c65232c32b14988bf9aebbfffafd85b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb\"" Sep 9 23:44:15.766054 containerd[2031]: time="2025-09-09T23:44:15.765998544Z" level=info 
msg="CreateContainer within sandbox \"8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:44:15.770095 containerd[2031]: time="2025-09-09T23:44:15.770043708Z" level=info msg="Container 87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:15.775081 containerd[2031]: time="2025-09-09T23:44:15.775012296Z" level=info msg="CreateContainer within sandbox \"6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:44:15.784965 containerd[2031]: time="2025-09-09T23:44:15.784662948Z" level=info msg="Container e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:15.788406 containerd[2031]: time="2025-09-09T23:44:15.788345521Z" level=info msg="CreateContainer within sandbox \"d14cd7342a2f9ca0ec05dc87762ebfbe0e7c44047aa08540aa67456dc8fbc5b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b\"" Sep 9 23:44:15.791051 containerd[2031]: time="2025-09-09T23:44:15.790997113Z" level=info msg="StartContainer for \"87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b\"" Sep 9 23:44:15.794677 containerd[2031]: time="2025-09-09T23:44:15.794610865Z" level=info msg="connecting to shim 87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b" address="unix:///run/containerd/s/679a3b43fdad5acf356ec5845b40feb5731913c2174369991d07fcaf80524a70" protocol=ttrpc version=3 Sep 9 23:44:15.797318 containerd[2031]: time="2025-09-09T23:44:15.797251321Z" level=info msg="CreateContainer within sandbox \"8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\"" Sep 9 23:44:15.798319 containerd[2031]: time="2025-09-09T23:44:15.798108877Z" level=info msg="StartContainer for \"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\"" Sep 9 23:44:15.801788 containerd[2031]: time="2025-09-09T23:44:15.801715177Z" level=info msg="Container 655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:15.803690 containerd[2031]: time="2025-09-09T23:44:15.803580697Z" level=info msg="connecting to shim e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37" address="unix:///run/containerd/s/999e16206879968bff1b44f5de1b581a40dc058e2672477452108e720e6fb276" protocol=ttrpc version=3 Sep 9 23:44:15.834705 systemd[1]: Started cri-containerd-87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b.scope - libcontainer container 87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b. 
Sep 9 23:44:15.848759 containerd[2031]: time="2025-09-09T23:44:15.848641309Z" level=info msg="CreateContainer within sandbox \"6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\"" Sep 9 23:44:15.850150 containerd[2031]: time="2025-09-09T23:44:15.850000777Z" level=info msg="StartContainer for \"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\"" Sep 9 23:44:15.851947 containerd[2031]: time="2025-09-09T23:44:15.851878873Z" level=info msg="connecting to shim 655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62" address="unix:///run/containerd/s/622738bb0cbeb08e7fdc14b66fd2258ff7de73984b711a778ac47479606b846c" protocol=ttrpc version=3 Sep 9 23:44:15.878560 systemd[1]: Started cri-containerd-e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37.scope - libcontainer container e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37. Sep 9 23:44:15.898481 systemd[1]: Started cri-containerd-655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62.scope - libcontainer container 655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62. Sep 9 23:44:15.974907 kubelet[2993]: E0909 23:44:15.974852 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:44:16.017506 containerd[2031]: time="2025-09-09T23:44:16.017363362Z" level=info msg="StartContainer for \"87516bc06888e7de3c2dcc3b75a3d0ec7299acb48eabbdf9a69a474c26e7502b\" returns successfully" Sep 9 23:44:16.020266 kubelet[2993]: E0909 23:44:16.019965 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 23:44:16.070778 containerd[2031]: time="2025-09-09T23:44:16.069857230Z" level=info msg="StartContainer for \"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\" returns successfully" Sep 9 23:44:16.083606 containerd[2031]: time="2025-09-09T23:44:16.083545438Z" level=info msg="StartContainer for \"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\" returns successfully" Sep 9 23:44:16.147945 kubelet[2993]: E0909 23:44:16.147778 2993 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-206&limit=500&resourceVersion=0\": dial tcp 172.31.26.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 23:44:16.392241 update_engine[2013]: I20250909 23:44:16.392155 2013 update_attempter.cc:509] Updating boot flags... 
Sep 9 23:44:16.517232 kubelet[2993]: I0909 23:44:16.517083 2993 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-206" Sep 9 23:44:17.033078 kubelet[2993]: E0909 23:44:17.033029 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:17.042409 kubelet[2993]: E0909 23:44:17.041004 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:17.048198 kubelet[2993]: E0909 23:44:17.048151 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:18.051710 kubelet[2993]: E0909 23:44:18.051662 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:18.052891 kubelet[2993]: E0909 23:44:18.052848 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:18.054341 kubelet[2993]: E0909 23:44:18.054299 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:19.056860 kubelet[2993]: E0909 23:44:19.056811 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:19.058234 kubelet[2993]: E0909 23:44:19.058183 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:19.059343 kubelet[2993]: E0909 23:44:19.059283 2993 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:20.445558 kubelet[2993]: E0909 23:44:20.445501 2993 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-206\" not found" node="ip-172-31-26-206" Sep 9 23:44:20.636592 kubelet[2993]: I0909 23:44:20.636539 2993 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-206" Sep 9 23:44:20.636756 kubelet[2993]: E0909 23:44:20.636597 2993 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-206\": node \"ip-172-31-26-206\" not found" Sep 9 23:44:20.705949 kubelet[2993]: I0909 23:44:20.705591 2993 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:20.725775 kubelet[2993]: E0909 23:44:20.725422 2993 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:20.725775 kubelet[2993]: I0909 23:44:20.725468 2993 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:20.732805 kubelet[2993]: E0909 23:44:20.732400 2993 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ip-172-31-26-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:20.733004 kubelet[2993]: I0909 23:44:20.732979 2993 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:20.741471 kubelet[2993]: E0909 23:44:20.741415 2993 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:20.880819 kubelet[2993]: I0909 23:44:20.880743 2993 apiserver.go:52] "Watching apiserver" Sep 9 23:44:20.909956 kubelet[2993]: I0909 23:44:20.909902 2993 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:44:23.411034 systemd[1]: Reload requested from client PID 3370 ('systemctl') (unit session-9.scope)... Sep 9 23:44:23.411057 systemd[1]: Reloading... Sep 9 23:44:23.600176 zram_generator::config[3417]: No configuration found. Sep 9 23:44:24.088273 systemd[1]: Reloading finished in 676 ms. Sep 9 23:44:24.136373 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:24.156723 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:44:24.157373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:24.157551 systemd[1]: kubelet.service: Consumed 2.382s CPU time, 127M memory peak. Sep 9 23:44:24.162310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:24.517590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:24.532867 (kubelet)[3474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:44:24.633597 kubelet[3474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:44:24.633597 kubelet[3474]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:44:24.633597 kubelet[3474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:44:24.633597 kubelet[3474]: I0909 23:44:24.633201 3474 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:44:24.652700 kubelet[3474]: I0909 23:44:24.652634 3474 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:44:24.652700 kubelet[3474]: I0909 23:44:24.652682 3474 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:44:24.653114 kubelet[3474]: I0909 23:44:24.653063 3474 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:44:24.656621 kubelet[3474]: I0909 23:44:24.656582 3474 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 23:44:24.663227 kubelet[3474]: I0909 23:44:24.662682 3474 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:44:24.671696 kubelet[3474]: I0909 23:44:24.671637 3474 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:44:24.678350 kubelet[3474]: I0909 23:44:24.678148 3474 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:44:24.678590 kubelet[3474]: I0909 23:44:24.678541 3474 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:44:24.678838 kubelet[3474]: I0909 23:44:24.678592 3474 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.678853 3474 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.678874 3474 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.678946 3474 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.679273 
3474 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.679301 3474 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.680098 3474 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:44:24.678984 kubelet[3474]: I0909 23:44:24.680151 3474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:44:24.693144 kubelet[3474]: I0909 23:44:24.692918 3474 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:44:24.695136 kubelet[3474]: I0909 23:44:24.695011 3474 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:44:24.719067 kubelet[3474]: I0909 23:44:24.718404 3474 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:44:24.725580 kubelet[3474]: I0909 23:44:24.725523 3474 server.go:1289] "Started kubelet" Sep 9 23:44:24.730637 kubelet[3474]: I0909 23:44:24.730531 3474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:44:24.746581 kubelet[3474]: I0909 23:44:24.746525 3474 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:44:24.752616 kubelet[3474]: I0909 23:44:24.752580 3474 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:44:24.764526 kubelet[3474]: I0909 23:44:24.764412 3474 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:44:24.765673 kubelet[3474]: I0909 23:44:24.765590 3474 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:44:24.766581 kubelet[3474]: I0909 23:44:24.766533 3474 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:44:24.766702 kubelet[3474]: I0909 23:44:24.766589 3474 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:44:24.766702 kubelet[3474]: I0909 23:44:24.766625 3474 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:44:24.766702 kubelet[3474]: I0909 23:44:24.766639 3474 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:44:24.766845 kubelet[3474]: E0909 23:44:24.766703 3474 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:44:24.767701 kubelet[3474]: I0909 23:44:24.767586 3474 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:44:24.769022 kubelet[3474]: I0909 23:44:24.768976 3474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:44:24.778197 kubelet[3474]: I0909 23:44:24.778157 3474 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:44:24.782872 kubelet[3474]: I0909 23:44:24.782835 3474 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:44:24.784359 kubelet[3474]: I0909 23:44:24.784318 3474 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:44:24.799045 kubelet[3474]: I0909 23:44:24.799012 3474 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:44:24.800867 kubelet[3474]: I0909 23:44:24.799257 3474 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:44:24.801274 kubelet[3474]: I0909 23:44:24.801235 3474 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:44:24.822891 kubelet[3474]: E0909 23:44:24.822832 3474 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:44:24.867019 kubelet[3474]: E0909 23:44:24.866970 3474 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:44:24.924896 kubelet[3474]: I0909 23:44:24.924822 3474 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:44:24.925165 kubelet[3474]: I0909 23:44:24.925068 3474 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:44:24.925165 kubelet[3474]: I0909 23:44:24.925106 3474 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:24.925730 kubelet[3474]: I0909 23:44:24.925577 3474 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:44:24.925730 kubelet[3474]: I0909 23:44:24.925611 3474 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:44:24.925730 kubelet[3474]: I0909 23:44:24.925643 3474 policy_none.go:49] "None policy: Start" Sep 9 23:44:24.925730 kubelet[3474]: I0909 23:44:24.925660 3474 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:44:24.925730 kubelet[3474]: I0909 23:44:24.925680 3474 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:44:24.926225 kubelet[3474]: I0909 23:44:24.926204 3474 state_mem.go:75] "Updated machine memory state" Sep 9 23:44:24.934568 kubelet[3474]: E0909 23:44:24.934512 3474 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:44:24.935918 kubelet[3474]: I0909 23:44:24.935892 3474 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:44:24.936996 kubelet[3474]: I0909 23:44:24.936529 3474 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:44:24.938246 kubelet[3474]: I0909 23:44:24.937942 3474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:44:24.942485 kubelet[3474]: E0909 23:44:24.942448 3474 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 23:44:25.055238 kubelet[3474]: I0909 23:44:25.054722 3474 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-206" Sep 9 23:44:25.068206 kubelet[3474]: I0909 23:44:25.068169 3474 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:25.071667 kubelet[3474]: I0909 23:44:25.071611 3474 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.072006 kubelet[3474]: I0909 23:44:25.069427 3474 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:25.083374 kubelet[3474]: I0909 23:44:25.082665 3474 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-206" Sep 9 23:44:25.083515 kubelet[3474]: I0909 23:44:25.083482 3474 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-206" Sep 9 23:44:25.087691 kubelet[3474]: I0909 23:44:25.087647 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c65232c32b14988bf9aebbfffafd85b8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-206\" (UID: \"c65232c32b14988bf9aebbfffafd85b8\") " pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:25.088147 kubelet[3474]: I0909 23:44:25.087929 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-ca-certs\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:25.088147 kubelet[3474]: I0909 23:44:25.087978 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:25.088147 kubelet[3474]: I0909 23:44:25.088027 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2f27cc36ac09d750d45c6a57ddf2ebc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-206\" (UID: \"e2f27cc36ac09d750d45c6a57ddf2ebc\") " pod="kube-system/kube-apiserver-ip-172-31-26-206" Sep 9 23:44:25.088147 kubelet[3474]: I0909 23:44:25.088066 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.088147 kubelet[3474]: I0909 23:44:25.088102 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.089385 kubelet[3474]: I0909 23:44:25.088750 3474 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.089385 kubelet[3474]: I0909 23:44:25.088816 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.089385 kubelet[3474]: I0909 23:44:25.088855 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5083d6503fe015ce1d2cc0578fc3c47b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-206\" (UID: \"5083d6503fe015ce1d2cc0578fc3c47b\") " pod="kube-system/kube-controller-manager-ip-172-31-26-206" Sep 9 23:44:25.686666 kubelet[3474]: I0909 23:44:25.686181 3474 apiserver.go:52] "Watching apiserver" Sep 9 23:44:25.783698 kubelet[3474]: I0909 23:44:25.783610 3474 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:44:25.854391 kubelet[3474]: I0909 23:44:25.854356 3474 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:25.881085 kubelet[3474]: E0909 23:44:25.881016 3474 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-206\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-206" Sep 9 23:44:25.948162 kubelet[3474]: I0909 23:44:25.947139 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-206" podStartSLOduration=0.947099435 podStartE2EDuration="947.099435ms" podCreationTimestamp="2025-09-09 23:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:25.916050239 +0000 UTC m=+1.373831552" watchObservedRunningTime="2025-09-09 23:44:25.947099435 +0000 UTC m=+1.404880748" Sep 9 23:44:25.974782 kubelet[3474]: I0909 23:44:25.974363 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-206" podStartSLOduration=0.974342351 podStartE2EDuration="974.342351ms" podCreationTimestamp="2025-09-09 23:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:25.949823147 +0000 UTC m=+1.407604448" watchObservedRunningTime="2025-09-09 23:44:25.974342351 +0000 UTC m=+1.432123652" Sep 9 23:44:25.974782 kubelet[3474]: I0909 23:44:25.974513 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-206" podStartSLOduration=0.974503067 podStartE2EDuration="974.503067ms" podCreationTimestamp="2025-09-09 23:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:25.973990271 +0000 UTC m=+1.431771584" watchObservedRunningTime="2025-09-09 23:44:25.974503067 +0000 UTC m=+1.432284404" Sep 9 
Sep 9 23:44:30.074882 kubelet[3474]: I0909 23:44:30.074829 3474 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 23:44:30.076059 containerd[2031]: time="2025-09-09T23:44:30.076008287Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 23:44:30.077466 kubelet[3474]: I0909 23:44:30.076357 3474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 23:44:30.829678 kubelet[3474]: I0909 23:44:30.828446 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acf42916-cec6-4155-9f24-8b1b433ab7bc-kube-proxy\") pod \"kube-proxy-d8bs9\" (UID: \"acf42916-cec6-4155-9f24-8b1b433ab7bc\") " pod="kube-system/kube-proxy-d8bs9"
Sep 9 23:44:30.829678 kubelet[3474]: I0909 23:44:30.828516 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf42916-cec6-4155-9f24-8b1b433ab7bc-xtables-lock\") pod \"kube-proxy-d8bs9\" (UID: \"acf42916-cec6-4155-9f24-8b1b433ab7bc\") " pod="kube-system/kube-proxy-d8bs9"
Sep 9 23:44:30.829678 kubelet[3474]: I0909 23:44:30.828553 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf42916-cec6-4155-9f24-8b1b433ab7bc-lib-modules\") pod \"kube-proxy-d8bs9\" (UID: \"acf42916-cec6-4155-9f24-8b1b433ab7bc\") " pod="kube-system/kube-proxy-d8bs9"
Sep 9 23:44:30.829678 kubelet[3474]: I0909 23:44:30.828590 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hml\" (UniqueName: \"kubernetes.io/projected/acf42916-cec6-4155-9f24-8b1b433ab7bc-kube-api-access-m9hml\") pod \"kube-proxy-d8bs9\" (UID: \"acf42916-cec6-4155-9f24-8b1b433ab7bc\") " pod="kube-system/kube-proxy-d8bs9"
Sep 9 23:44:30.841569 systemd[1]: Created slice kubepods-besteffort-podacf42916_cec6_4155_9f24_8b1b433ab7bc.slice - libcontainer container kubepods-besteffort-podacf42916_cec6_4155_9f24_8b1b433ab7bc.slice.
Sep 9 23:44:30.945460 kubelet[3474]: E0909 23:44:30.945335 3474 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 9 23:44:30.945604 kubelet[3474]: E0909 23:44:30.945471 3474 projected.go:194] Error preparing data for projected volume kube-api-access-m9hml for pod kube-system/kube-proxy-d8bs9: configmap "kube-root-ca.crt" not found
Sep 9 23:44:30.945740 kubelet[3474]: E0909 23:44:30.945666 3474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/acf42916-cec6-4155-9f24-8b1b433ab7bc-kube-api-access-m9hml podName:acf42916-cec6-4155-9f24-8b1b433ab7bc nodeName:}" failed. No retries permitted until 2025-09-09 23:44:31.445604624 +0000 UTC m=+6.903385925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m9hml" (UniqueName: "kubernetes.io/projected/acf42916-cec6-4155-9f24-8b1b433ab7bc-kube-api-access-m9hml") pod "kube-proxy-d8bs9" (UID: "acf42916-cec6-4155-9f24-8b1b433ab7bc") : configmap "kube-root-ca.crt" not found
Sep 9 23:44:31.258658 systemd[1]: Created slice kubepods-besteffort-podb2f8dafe_1513_4d90_a3b1_e6ee4d87cdbd.slice - libcontainer container kubepods-besteffort-podb2f8dafe_1513_4d90_a3b1_e6ee4d87cdbd.slice.
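The nestedpendingoperations entry above fails the token mount at 23:44:30.945604624 and forbids retries until 23:44:31.445604624, exactly the logged durationBeforeRetry of 500ms; on repeated failures the kubelet stretches that delay exponentially (here the mount succeeds once kube-root-ca.crt is published, so only the first delay is visible). A sketch of that retry arithmetic; the doubling policy and the cap are assumptions not visible in this excerpt, only the 500ms initial delay is taken from the log:

```python
from datetime import datetime, timedelta

INITIAL = timedelta(milliseconds=500)   # "durationBeforeRetry 500ms", as logged
CAP = timedelta(minutes=2, seconds=2)   # assumed upper bound on the delay

def next_retry(last_failure: datetime, consecutive_failures: int) -> datetime:
    """Delay doubles per consecutive failure until it reaches the cap."""
    delay = min(INITIAL * 2 ** (consecutive_failures - 1), CAP)
    return last_failure + delay

failed_at = datetime.fromisoformat("2025-09-09 23:44:30.945604")
print(next_retry(failed_at, 1))  # 2025-09-09 23:44:31.445604, matching the log
```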
Sep 9 23:44:31.331994 kubelet[3474]: I0909 23:44:31.331951 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd-var-lib-calico\") pod \"tigera-operator-755d956888-m5xmw\" (UID: \"b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd\") " pod="tigera-operator/tigera-operator-755d956888-m5xmw"
Sep 9 23:44:31.332830 kubelet[3474]: I0909 23:44:31.332631 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmd49\" (UniqueName: \"kubernetes.io/projected/b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd-kube-api-access-dmd49\") pod \"tigera-operator-755d956888-m5xmw\" (UID: \"b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd\") " pod="tigera-operator/tigera-operator-755d956888-m5xmw"
Sep 9 23:44:31.568246 containerd[2031]: time="2025-09-09T23:44:31.567530331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-m5xmw,Uid:b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd,Namespace:tigera-operator,Attempt:0,}"
Sep 9 23:44:31.609460 containerd[2031]: time="2025-09-09T23:44:31.609401235Z" level=info msg="connecting to shim f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c" address="unix:///run/containerd/s/41c0f7f67f35a733d4fdfdaf80d74cd42d2a42e4f2e8c67774fefec7c7ff7fbc" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:31.660137 systemd[1]: Started cri-containerd-f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c.scope - libcontainer container f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c.
Sep 9 23:44:31.757918 containerd[2031]: time="2025-09-09T23:44:31.757869136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8bs9,Uid:acf42916-cec6-4155-9f24-8b1b433ab7bc,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:31.760321 containerd[2031]: time="2025-09-09T23:44:31.760257700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-m5xmw,Uid:b2f8dafe-1513-4d90-a3b1-e6ee4d87cdbd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c\""
Sep 9 23:44:31.764824 containerd[2031]: time="2025-09-09T23:44:31.764498728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 23:44:31.807442 containerd[2031]: time="2025-09-09T23:44:31.807357916Z" level=info msg="connecting to shim 39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a" address="unix:///run/containerd/s/aebbd9c1d9927c1a5773f1cfd91494df72abd0d425105af8ba28c640ff72c401" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:31.851431 systemd[1]: Started cri-containerd-39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a.scope - libcontainer container 39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a.
Sep 9 23:44:31.922592 containerd[2031]: time="2025-09-09T23:44:31.922546145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8bs9,Uid:acf42916-cec6-4155-9f24-8b1b433ab7bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a\""
Sep 9 23:44:31.934527 containerd[2031]: time="2025-09-09T23:44:31.934468613Z" level=info msg="CreateContainer within sandbox \"39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 23:44:31.957825 containerd[2031]: time="2025-09-09T23:44:31.957765869Z" level=info msg="Container 5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:31.974870 containerd[2031]: time="2025-09-09T23:44:31.974808629Z" level=info msg="CreateContainer within sandbox \"39faf6b78898b54f2f3deaca75b703b4bd79ab35caf31a916a868e40ca7d474a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa\""
Sep 9 23:44:31.976162 containerd[2031]: time="2025-09-09T23:44:31.976085861Z" level=info msg="StartContainer for \"5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa\""
Sep 9 23:44:31.979450 containerd[2031]: time="2025-09-09T23:44:31.979390937Z" level=info msg="connecting to shim 5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa" address="unix:///run/containerd/s/aebbd9c1d9927c1a5773f1cfd91494df72abd0d425105af8ba28c640ff72c401" protocol=ttrpc version=3
Sep 9 23:44:32.017466 systemd[1]: Started cri-containerd-5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa.scope - libcontainer container 5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa.
Sep 9 23:44:32.108241 containerd[2031]: time="2025-09-09T23:44:32.107565470Z" level=info msg="StartContainer for \"5cf36b2e1846aa7057d6aed0ef392cdf9666048669dffac5b246074ec83949fa\" returns successfully"
Sep 9 23:44:32.866372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353937582.mount: Deactivated successfully.
Sep 9 23:44:33.829341 containerd[2031]: time="2025-09-09T23:44:33.829276002Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.830785 containerd[2031]: time="2025-09-09T23:44:33.830667354Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 9 23:44:33.832192 containerd[2031]: time="2025-09-09T23:44:33.831775554Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.835933 containerd[2031]: time="2025-09-09T23:44:33.835239258Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.836781 containerd[2031]: time="2025-09-09T23:44:33.836721450Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.07154165s"
Sep 9 23:44:33.836878 containerd[2031]: time="2025-09-09T23:44:33.836778882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 9 23:44:33.844534 containerd[2031]: time="2025-09-09T23:44:33.844435398Z" level=info msg="CreateContainer within sandbox \"f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 23:44:33.858027 containerd[2031]: time="2025-09-09T23:44:33.857961222Z" level=info msg="Container 5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:33.869981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460282048.mount: Deactivated successfully.
Sep 9 23:44:33.871447 containerd[2031]: time="2025-09-09T23:44:33.870204666Z" level=info msg="CreateContainer within sandbox \"f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\""
Sep 9 23:44:33.875148 containerd[2031]: time="2025-09-09T23:44:33.874881210Z" level=info msg="StartContainer for \"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\""
Sep 9 23:44:33.881413 containerd[2031]: time="2025-09-09T23:44:33.881289798Z" level=info msg="connecting to shim 5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764" address="unix:///run/containerd/s/41c0f7f67f35a733d4fdfdaf80d74cd42d2a42e4f2e8c67774fefec7c7ff7fbc" protocol=ttrpc version=3
Sep 9 23:44:33.926709 systemd[1]: Started cri-containerd-5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764.scope - libcontainer container 5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764.
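The pull above reads 22,152,365 bytes and the Pulled entry reports 2.07154165s end to end, so the effective transfer rate is directly recoverable from these two entries; a quick check of that arithmetic:

```python
bytes_read = 22_152_365   # "bytes read=22152365" from the stop-pulling entry
elapsed_s = 2.07154165    # "in 2.07154165s" from the Pulled entry

# Roughly 10.2 MiB/s for the quay.io/tigera/operator:v1.38.6 pull.
print(f"{bytes_read / elapsed_s / 2**20:.1f} MiB/s")
```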
Sep 9 23:44:33.981850 containerd[2031]: time="2025-09-09T23:44:33.981512755Z" level=info msg="StartContainer for \"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\" returns successfully"
Sep 9 23:44:34.787839 kubelet[3474]: I0909 23:44:34.787752 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d8bs9" podStartSLOduration=4.787729447 podStartE2EDuration="4.787729447s" podCreationTimestamp="2025-09-09 23:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:32.916517994 +0000 UTC m=+8.374299439" watchObservedRunningTime="2025-09-09 23:44:34.787729447 +0000 UTC m=+10.245510760"
Sep 9 23:44:34.922913 kubelet[3474]: I0909 23:44:34.922829 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-m5xmw" podStartSLOduration=1.84823443 podStartE2EDuration="3.922805564s" podCreationTimestamp="2025-09-09 23:44:31 +0000 UTC" firstStartedPulling="2025-09-09 23:44:31.763695172 +0000 UTC m=+7.221476461" lastFinishedPulling="2025-09-09 23:44:33.838266294 +0000 UTC m=+9.296047595" observedRunningTime="2025-09-09 23:44:34.921537788 +0000 UTC m=+10.379319089" watchObservedRunningTime="2025-09-09 23:44:34.922805564 +0000 UTC m=+10.380586865"
Sep 9 23:44:40.966387 sudo[2417]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:40.992174 sshd[2416]: Connection closed by 139.178.89.65 port 37932
Sep 9 23:44:40.991071 sshd-session[2413]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:41.000059 systemd[1]: sshd@8-172.31.26.206:22-139.178.89.65:37932.service: Deactivated successfully.
Sep 9 23:44:41.006933 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 23:44:41.007449 systemd[1]: session-9.scope: Consumed 13.271s CPU time, 225.5M memory peak.
Sep 9 23:44:41.013790 systemd-logind[2011]: Session 9 logged out. Waiting for processes to exit.
Sep 9 23:44:41.018917 systemd-logind[2011]: Removed session 9.
Sep 9 23:44:53.995582 systemd[1]: Created slice kubepods-besteffort-pod8ecd6298_ee37_41c6_9fab_1fa1296134b1.slice - libcontainer container kubepods-besteffort-pod8ecd6298_ee37_41c6_9fab_1fa1296134b1.slice.
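The two latency-tracker entries above also show how the fields relate: podStartSLOduration is the end-to-end startup duration minus the image-pull window, which is why kube-proxy-d8bs9 (zero-value pull timestamps, image already present) reports identical SLO and E2E values while tigera-operator does not. Recomputing the tigera-operator figure from the logged timestamps (truncated to microseconds, hence the tiny divergence in the last digits):

```python
from datetime import datetime

# Timestamps as logged for tigera-operator-755d956888-m5xmw
first_pull = datetime.fromisoformat("2025-09-09 23:44:31.763695")
last_pull = datetime.fromisoformat("2025-09-09 23:44:33.838266")
e2e = 3.922805564  # podStartE2EDuration

# SLO duration excludes the time spent pulling the image.
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"{slo:.6f}s")  # ~1.848235s vs. the logged podStartSLOduration=1.84823443
```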
Sep 9 23:44:54.001004 kubelet[3474]: I0909 23:44:54.000885 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ecd6298-ee37-41c6-9fab-1fa1296134b1-tigera-ca-bundle\") pod \"calico-typha-5b888b474-8qgdr\" (UID: \"8ecd6298-ee37-41c6-9fab-1fa1296134b1\") " pod="calico-system/calico-typha-5b888b474-8qgdr"
Sep 9 23:44:54.001571 kubelet[3474]: I0909 23:44:54.001036 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8ecd6298-ee37-41c6-9fab-1fa1296134b1-typha-certs\") pod \"calico-typha-5b888b474-8qgdr\" (UID: \"8ecd6298-ee37-41c6-9fab-1fa1296134b1\") " pod="calico-system/calico-typha-5b888b474-8qgdr"
Sep 9 23:44:54.001571 kubelet[3474]: I0909 23:44:54.001186 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc888\" (UniqueName: \"kubernetes.io/projected/8ecd6298-ee37-41c6-9fab-1fa1296134b1-kube-api-access-wc888\") pod \"calico-typha-5b888b474-8qgdr\" (UID: \"8ecd6298-ee37-41c6-9fab-1fa1296134b1\") " pod="calico-system/calico-typha-5b888b474-8qgdr"
Sep 9 23:44:54.308504 containerd[2031]: time="2025-09-09T23:44:54.308339448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b888b474-8qgdr,Uid:8ecd6298-ee37-41c6-9fab-1fa1296134b1,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.360757 containerd[2031]: time="2025-09-09T23:44:54.360667920Z" level=info msg="connecting to shim 93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed" address="unix:///run/containerd/s/a78164e6563198d321ab7704335fad589769b767e80219d5acc8194a866ef0c5" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:54.384762 systemd[1]: Created slice kubepods-besteffort-poddcc8f847_8300_4ef9_9465_07e5200a17b3.slice - libcontainer container kubepods-besteffort-poddcc8f847_8300_4ef9_9465_07e5200a17b3.slice.
Sep 9 23:44:54.406824 kubelet[3474]: I0909 23:44:54.406761 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-cni-net-dir\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.406985 kubelet[3474]: I0909 23:44:54.406830 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-policysync\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.406985 kubelet[3474]: I0909 23:44:54.406872 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-var-lib-calico\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.406985 kubelet[3474]: I0909 23:44:54.406923 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mfl\" (UniqueName: \"kubernetes.io/projected/dcc8f847-8300-4ef9-9465-07e5200a17b3-kube-api-access-m2mfl\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.406985 kubelet[3474]: I0909 23:44:54.406963 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dcc8f847-8300-4ef9-9465-07e5200a17b3-node-certs\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.408532 kubelet[3474]: I0909 23:44:54.406999 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-cni-log-dir\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.408532 kubelet[3474]: I0909 23:44:54.407035 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-flexvol-driver-host\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.409490 kubelet[3474]: I0909 23:44:54.407111 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-var-run-calico\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.410403 kubelet[3474]: I0909 23:44:54.409932 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-cni-bin-dir\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.411443 kubelet[3474]: I0909 23:44:54.411277 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcc8f847-8300-4ef9-9465-07e5200a17b3-tigera-ca-bundle\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.411443 kubelet[3474]: I0909 23:44:54.411434 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-lib-modules\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.411812 kubelet[3474]: I0909 23:44:54.411523 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcc8f847-8300-4ef9-9465-07e5200a17b3-xtables-lock\") pod \"calico-node-795hw\" (UID: \"dcc8f847-8300-4ef9-9465-07e5200a17b3\") " pod="calico-system/calico-node-795hw"
Sep 9 23:44:54.457268 systemd[1]: Started cri-containerd-93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed.scope - libcontainer container 93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed.
Sep 9 23:44:54.519686 kubelet[3474]: E0909 23:44:54.519507 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.520111 kubelet[3474]: W0909 23:44:54.519785 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.521239 kubelet[3474]: E0909 23:44:54.521166 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.533895 kubelet[3474]: E0909 23:44:54.533842 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.533895 kubelet[3474]: W0909 23:44:54.533902 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.534548 kubelet[3474]: E0909 23:44:54.533937 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.551410 kubelet[3474]: E0909 23:44:54.551277 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.551410 kubelet[3474]: W0909 23:44:54.551315 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.551410 kubelet[3474]: E0909 23:44:54.551347 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.721477 containerd[2031]: time="2025-09-09T23:44:54.719940986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-795hw,Uid:dcc8f847-8300-4ef9-9465-07e5200a17b3,Namespace:calico-system,Attempt:0,}"
Sep 9 23:44:54.759169 containerd[2031]: time="2025-09-09T23:44:54.756970226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b888b474-8qgdr,Uid:8ecd6298-ee37-41c6-9fab-1fa1296134b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed\""
Sep 9 23:44:54.766438 containerd[2031]: time="2025-09-09T23:44:54.766285586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 23:44:54.797093 containerd[2031]: time="2025-09-09T23:44:54.797036774Z" level=info msg="connecting to shim 567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0" address="unix:///run/containerd/s/37498f60e2d6cb99ab88e265605557a277aff85e17930538f33b472e10b34dd8" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:54.880528 systemd[1]: Started cri-containerd-567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0.scope - libcontainer container 567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0.
Sep 9 23:44:54.963593 kubelet[3474]: E0909 23:44:54.962874 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5"
Sep 9 23:44:54.984505 kubelet[3474]: E0909 23:44:54.983811 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.984505 kubelet[3474]: W0909 23:44:54.983850 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.984505 kubelet[3474]: E0909 23:44:54.983883 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.987893 kubelet[3474]: E0909 23:44:54.987846 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.988046 kubelet[3474]: W0909 23:44:54.987882 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.988046 kubelet[3474]: E0909 23:44:54.987958 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.991305 kubelet[3474]: E0909 23:44:54.991256 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.991305 kubelet[3474]: W0909 23:44:54.991298 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.991305 kubelet[3474]: E0909 23:44:54.991330 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.996520 kubelet[3474]: E0909 23:44:54.994469 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.996520 kubelet[3474]: W0909 23:44:54.996195 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:54.996520 kubelet[3474]: E0909 23:44:54.996268 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:54.997254 kubelet[3474]: E0909 23:44:54.997224 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:54.998336 kubelet[3474]: W0909 23:44:54.998301 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.000292 kubelet[3474]: E0909 23:44:54.999484 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.001653 kubelet[3474]: E0909 23:44:55.001598 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.003850 kubelet[3474]: W0909 23:44:55.003187 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.003850 kubelet[3474]: E0909 23:44:55.003263 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.004763 kubelet[3474]: E0909 23:44:55.004613 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.005277 kubelet[3474]: W0909 23:44:55.004944 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.005277 kubelet[3474]: E0909 23:44:55.004985 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.010984 kubelet[3474]: E0909 23:44:55.010482 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.011729 kubelet[3474]: W0909 23:44:55.011373 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.011729 kubelet[3474]: E0909 23:44:55.011675 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.020364 kubelet[3474]: E0909 23:44:55.019615 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.020517 kubelet[3474]: W0909 23:44:55.020364 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.020517 kubelet[3474]: E0909 23:44:55.020410 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.022318 kubelet[3474]: E0909 23:44:55.022112 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.022318 kubelet[3474]: W0909 23:44:55.022178 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.022318 kubelet[3474]: E0909 23:44:55.022211 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.025094 kubelet[3474]: E0909 23:44:55.025047 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.025419 kubelet[3474]: W0909 23:44:55.025082 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.025419 kubelet[3474]: E0909 23:44:55.025337 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.028155 kubelet[3474]: E0909 23:44:55.026703 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.028155 kubelet[3474]: W0909 23:44:55.026738 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.028155 kubelet[3474]: E0909 23:44:55.026968 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.028430 kubelet[3474]: E0909 23:44:55.028238 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.028430 kubelet[3474]: W0909 23:44:55.028306 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.028530 kubelet[3474]: E0909 23:44:55.028453 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.032473 kubelet[3474]: E0909 23:44:55.032418 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.033231 kubelet[3474]: W0909 23:44:55.032464 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.033231 kubelet[3474]: E0909 23:44:55.033115 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.035547 kubelet[3474]: E0909 23:44:55.035496 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.036773 kubelet[3474]: W0909 23:44:55.035941 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.036773 kubelet[3474]: E0909 23:44:55.035983 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.037523 kubelet[3474]: E0909 23:44:55.037072 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.037523 kubelet[3474]: W0909 23:44:55.037099 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.037523 kubelet[3474]: E0909 23:44:55.037148 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.041160 kubelet[3474]: E0909 23:44:55.039936 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.041372 kubelet[3474]: W0909 23:44:55.041334 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.041523 kubelet[3474]: E0909 23:44:55.041499 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.042288 kubelet[3474]: E0909 23:44:55.042240 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.042687 kubelet[3474]: W0909 23:44:55.042654 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.043216 kubelet[3474]: E0909 23:44:55.043184 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.044571 kubelet[3474]: E0909 23:44:55.044535 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.045141 kubelet[3474]: W0909 23:44:55.044720 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.045141 kubelet[3474]: E0909 23:44:55.044758 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.046482 kubelet[3474]: E0909 23:44:55.046444 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.046712 kubelet[3474]: W0909 23:44:55.046653 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.047471 kubelet[3474]: E0909 23:44:55.046948 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.049854 kubelet[3474]: E0909 23:44:55.049099 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.050067 kubelet[3474]: W0909 23:44:55.050013 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.050229 kubelet[3474]: E0909 23:44:55.050204 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.051161 kubelet[3474]: I0909 23:44:55.050785 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6l8h\" (UniqueName: \"kubernetes.io/projected/f02a0402-c726-48be-86a1-a888ea61e0f5-kube-api-access-n6l8h\") pod \"csi-node-driver-7cs7p\" (UID: \"f02a0402-c726-48be-86a1-a888ea61e0f5\") " pod="calico-system/csi-node-driver-7cs7p"
Sep 9 23:44:55.052054 kubelet[3474]: E0909 23:44:55.051995 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.052054 kubelet[3474]: W0909 23:44:55.052036 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.052324 kubelet[3474]: E0909 23:44:55.052070 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.053584 kubelet[3474]: E0909 23:44:55.053534 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.053584 kubelet[3474]: W0909 23:44:55.053573 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.053778 kubelet[3474]: E0909 23:44:55.053605 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.055586 kubelet[3474]: E0909 23:44:55.055530 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.055586 kubelet[3474]: W0909 23:44:55.055685 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.055586 kubelet[3474]: E0909 23:44:55.055719 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.055956 kubelet[3474]: I0909 23:44:55.055771 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f02a0402-c726-48be-86a1-a888ea61e0f5-socket-dir\") pod \"csi-node-driver-7cs7p\" (UID: \"f02a0402-c726-48be-86a1-a888ea61e0f5\") " pod="calico-system/csi-node-driver-7cs7p"
Sep 9 23:44:55.056912 kubelet[3474]: E0909 23:44:55.056860 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.057996 kubelet[3474]: W0909 23:44:55.057176 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.057996 kubelet[3474]: E0909 23:44:55.057245 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.057996 kubelet[3474]: I0909 23:44:55.057309 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02a0402-c726-48be-86a1-a888ea61e0f5-kubelet-dir\") pod \"csi-node-driver-7cs7p\" (UID: \"f02a0402-c726-48be-86a1-a888ea61e0f5\") " pod="calico-system/csi-node-driver-7cs7p"
Sep 9 23:44:55.058875 kubelet[3474]: E0909 23:44:55.058754 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.059331 kubelet[3474]: W0909 23:44:55.059104 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.060000 kubelet[3474]: E0909 23:44:55.059965 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.061596 kubelet[3474]: E0909 23:44:55.061310 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.061596 kubelet[3474]: W0909 23:44:55.061346 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.061596 kubelet[3474]: E0909 23:44:55.061377 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.062799 kubelet[3474]: E0909 23:44:55.062767 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.062952 kubelet[3474]: W0909 23:44:55.062927 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.063368 kubelet[3474]: E0909 23:44:55.063300 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.064311 kubelet[3474]: I0909 23:44:55.064200 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f02a0402-c726-48be-86a1-a888ea61e0f5-registration-dir\") pod \"csi-node-driver-7cs7p\" (UID: \"f02a0402-c726-48be-86a1-a888ea61e0f5\") " pod="calico-system/csi-node-driver-7cs7p"
Sep 9 23:44:55.065897 kubelet[3474]: E0909 23:44:55.065110 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.065897 kubelet[3474]: W0909 23:44:55.065383 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.065897 kubelet[3474]: E0909 23:44:55.065418 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.066703 kubelet[3474]: E0909 23:44:55.066582 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.069237 kubelet[3474]: W0909 23:44:55.069184 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.069456 kubelet[3474]: E0909 23:44:55.069432 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.070986 kubelet[3474]: E0909 23:44:55.070596 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.070986 kubelet[3474]: W0909 23:44:55.070627 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.070986 kubelet[3474]: E0909 23:44:55.070654 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.070986 kubelet[3474]: I0909 23:44:55.070722 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f02a0402-c726-48be-86a1-a888ea61e0f5-varrun\") pod \"csi-node-driver-7cs7p\" (UID: \"f02a0402-c726-48be-86a1-a888ea61e0f5\") " pod="calico-system/csi-node-driver-7cs7p"
Sep 9 23:44:55.072481 kubelet[3474]: E0909 23:44:55.072155 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.072481 kubelet[3474]: W0909 23:44:55.072188 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.072481 kubelet[3474]: E0909 23:44:55.072218 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.074146 kubelet[3474]: E0909 23:44:55.073079 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.074146 kubelet[3474]: W0909 23:44:55.074010 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.074146 kubelet[3474]: E0909 23:44:55.074055 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.077466 kubelet[3474]: E0909 23:44:55.077430 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.077953 kubelet[3474]: W0909 23:44:55.077658 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.077953 kubelet[3474]: E0909 23:44:55.077714 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.080105 kubelet[3474]: E0909 23:44:55.079568 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.080105 kubelet[3474]: W0909 23:44:55.079724 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.080105 kubelet[3474]: E0909 23:44:55.079758 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.085843 containerd[2031]: time="2025-09-09T23:44:55.085750728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-795hw,Uid:dcc8f847-8300-4ef9-9465-07e5200a17b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\""
Sep 9 23:44:55.174504 kubelet[3474]: E0909 23:44:55.173271 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.174504 kubelet[3474]: W0909 23:44:55.174201 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.174504 kubelet[3474]: E0909 23:44:55.174250 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.174864 kubelet[3474]: E0909 23:44:55.174838 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.175859 kubelet[3474]: W0909 23:44:55.174949 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.176106 kubelet[3474]: E0909 23:44:55.176071 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.177296 kubelet[3474]: E0909 23:44:55.176672 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.178353 kubelet[3474]: W0909 23:44:55.178304 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.178532 kubelet[3474]: E0909 23:44:55.178507 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.181003 kubelet[3474]: E0909 23:44:55.180657 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.181003 kubelet[3474]: W0909 23:44:55.180691 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.181003 kubelet[3474]: E0909 23:44:55.180721 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.182762 kubelet[3474]: E0909 23:44:55.182723 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.184427 kubelet[3474]: W0909 23:44:55.184336 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.184712 kubelet[3474]: E0909 23:44:55.184388 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.185499 kubelet[3474]: E0909 23:44:55.185464 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.185918 kubelet[3474]: W0909 23:44:55.185676 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.185918 kubelet[3474]: E0909 23:44:55.185719 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.187286 kubelet[3474]: E0909 23:44:55.187250 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.188226 kubelet[3474]: W0909 23:44:55.187965 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.188226 kubelet[3474]: E0909 23:44:55.188010 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.188693 kubelet[3474]: E0909 23:44:55.188667 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.189064 kubelet[3474]: W0909 23:44:55.188808 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.189064 kubelet[3474]: E0909 23:44:55.188847 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.189515 kubelet[3474]: E0909 23:44:55.189482 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.189608 kubelet[3474]: W0909 23:44:55.189515 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.189608 kubelet[3474]: E0909 23:44:55.189543 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.191331 kubelet[3474]: E0909 23:44:55.191282 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.191331 kubelet[3474]: W0909 23:44:55.191321 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.191731 kubelet[3474]: E0909 23:44:55.191355 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.191933 kubelet[3474]: E0909 23:44:55.191895 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.191999 kubelet[3474]: W0909 23:44:55.191969 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.192077 kubelet[3474]: E0909 23:44:55.191997 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.194331 kubelet[3474]: E0909 23:44:55.194281 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.194331 kubelet[3474]: W0909 23:44:55.194320 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.194707 kubelet[3474]: E0909 23:44:55.194353 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.194916 kubelet[3474]: E0909 23:44:55.194879 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.194916 kubelet[3474]: W0909 23:44:55.194911 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.195070 kubelet[3474]: E0909 23:44:55.194936 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.195388 kubelet[3474]: E0909 23:44:55.195352 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.195388 kubelet[3474]: W0909 23:44:55.195381 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.195626 kubelet[3474]: E0909 23:44:55.195407 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.196393 kubelet[3474]: E0909 23:44:55.196343 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.196393 kubelet[3474]: W0909 23:44:55.196383 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.196702 kubelet[3474]: E0909 23:44:55.196414 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.197744 kubelet[3474]: E0909 23:44:55.197694 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.197744 kubelet[3474]: W0909 23:44:55.197732 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.198036 kubelet[3474]: E0909 23:44:55.197765 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.198502 kubelet[3474]: E0909 23:44:55.198457 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.198502 kubelet[3474]: W0909 23:44:55.198492 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.198717 kubelet[3474]: E0909 23:44:55.198522 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.199519 kubelet[3474]: E0909 23:44:55.199350 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.199519 kubelet[3474]: W0909 23:44:55.199510 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.199789 kubelet[3474]: E0909 23:44:55.199543 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.200718 kubelet[3474]: E0909 23:44:55.200669 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.200718 kubelet[3474]: W0909 23:44:55.200707 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.200971 kubelet[3474]: E0909 23:44:55.200739 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.201432 kubelet[3474]: E0909 23:44:55.201390 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.201432 kubelet[3474]: W0909 23:44:55.201424 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.201932 kubelet[3474]: E0909 23:44:55.201454 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.202417 kubelet[3474]: E0909 23:44:55.202374 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.202417 kubelet[3474]: W0909 23:44:55.202409 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.203297 kubelet[3474]: E0909 23:44:55.202440 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:44:55.203541 kubelet[3474]: E0909 23:44:55.203501 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:44:55.203541 kubelet[3474]: W0909 23:44:55.203536 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:44:55.203678 kubelet[3474]: E0909 23:44:55.203566 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 23:44:55.205375 kubelet[3474]: E0909 23:44:55.205322 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:55.205375 kubelet[3474]: W0909 23:44:55.205363 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:55.205559 kubelet[3474]: E0909 23:44:55.205397 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:55.205869 kubelet[3474]: E0909 23:44:55.205835 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:55.205869 kubelet[3474]: W0909 23:44:55.205863 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:55.207242 kubelet[3474]: E0909 23:44:55.205888 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:55.207242 kubelet[3474]: E0909 23:44:55.206264 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:55.207242 kubelet[3474]: W0909 23:44:55.206286 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:55.207242 kubelet[3474]: E0909 23:44:55.206314 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:55.258475 kubelet[3474]: E0909 23:44:55.258429 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:55.258475 kubelet[3474]: W0909 23:44:55.258465 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:55.258695 kubelet[3474]: E0909 23:44:55.258504 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:56.080465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005615758.mount: Deactivated successfully. 
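The triplet repeating above (driver-call.go:262, driver-call.go:149, plugins.go:703) is the kubelet's volume-plugin prober re-scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: it finds the nodeagent~uds directory, tries to exec its uds driver with the argument init, gets no binary and therefore empty stdout, and unmarshalling an empty string fails with "unexpected end of JSON input". The uds executable is normally installed by Calico's flexvol-driver init container, whose pod2daemon-flexvol image pull appears further down, so the repetition stops once that container has run. Below is a minimal sketch, in Go, of the stdout contract a FlexVolume driver is expected to honour on init; the struct and field names follow the upstream FlexVolume convention and should be read as illustrative, not as the actual Calico driver source.

    // Sketch of the stdout contract the kubelet's FlexVolume prober expects.
    // The kubelet execs "<driver> init" and json-unmarshals whatever the
    // driver prints; a missing binary yields empty output, which is exactly
    // the "unexpected end of JSON input" failure logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus mirrors the FlexVolume result shape ("status",
    // "capabilities", ...); treat the exact field set as illustrative.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out)) // kubelet parses this single line
            return
        }
        // Any call this stub does not implement is reported as unsupported.
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }

A binary with this behaviour placed at the probed path would be expected to silence these errors even before calico-node starts, though here the Calico init container does it shortly.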
Sep 9 23:44:56.768142 kubelet[3474]: E0909 23:44:56.767615 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5" Sep 9 23:44:57.346656 containerd[2031]: time="2025-09-09T23:44:57.346602627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:57.349253 containerd[2031]: time="2025-09-09T23:44:57.349201143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 23:44:57.351335 containerd[2031]: time="2025-09-09T23:44:57.351224691Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:57.356904 containerd[2031]: time="2025-09-09T23:44:57.356824947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:57.358208 containerd[2031]: time="2025-09-09T23:44:57.357964395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.588754997s" Sep 9 23:44:57.358208 containerd[2031]: time="2025-09-09T23:44:57.358016247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 23:44:57.361196 containerd[2031]: time="2025-09-09T23:44:57.360684879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 23:44:57.392661 containerd[2031]: time="2025-09-09T23:44:57.392614395Z" level=info msg="CreateContainer within sandbox \"93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 23:44:57.413970 containerd[2031]: time="2025-09-09T23:44:57.412391907Z" level=info msg="Container 80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:57.430243 containerd[2031]: time="2025-09-09T23:44:57.430193331Z" level=info msg="CreateContainer within sandbox \"93edf4bee60e64f4ce405cedceaa4620a7c244fb532d4e00686f413503d9a3ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01\"" Sep 9 23:44:57.431552 containerd[2031]: time="2025-09-09T23:44:57.431496195Z" level=info msg="StartContainer for \"80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01\"" Sep 9 23:44:57.435947 containerd[2031]: time="2025-09-09T23:44:57.435897783Z" level=info msg="connecting to shim 80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01" address="unix:///run/containerd/s/a78164e6563198d321ab7704335fad589769b767e80219d5acc8194a866ef0c5" protocol=ttrpc version=3 Sep 9 23:44:57.480299 systemd[1]: Started 
cri-containerd-80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01.scope - libcontainer container 80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01. Sep 9 23:44:57.570505 containerd[2031]: time="2025-09-09T23:44:57.570461596Z" level=info msg="StartContainer for \"80d8da5d730a72d0e8b001914fcb9278a789f6f4ae8982688a6cbb89b731be01\" returns successfully" Sep 9 23:44:58.071398 kubelet[3474]: E0909 23:44:58.071281 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.071398 kubelet[3474]: W0909 23:44:58.071342 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.071398 kubelet[3474]: E0909 23:44:58.071376 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.073435 kubelet[3474]: E0909 23:44:58.073385 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.073435 kubelet[3474]: W0909 23:44:58.073423 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.073646 kubelet[3474]: E0909 23:44:58.073455 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.073854 kubelet[3474]: E0909 23:44:58.073813 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.073854 kubelet[3474]: W0909 23:44:58.073844 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.073962 kubelet[3474]: E0909 23:44:58.073868 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.074355 kubelet[3474]: E0909 23:44:58.074292 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.074355 kubelet[3474]: W0909 23:44:58.074346 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.074637 kubelet[3474]: E0909 23:44:58.074370 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.075593 kubelet[3474]: E0909 23:44:58.075534 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.075780 kubelet[3474]: W0909 23:44:58.075738 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.077238 kubelet[3474]: E0909 23:44:58.075784 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.077814 kubelet[3474]: E0909 23:44:58.077747 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.077814 kubelet[3474]: W0909 23:44:58.077805 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.077981 kubelet[3474]: E0909 23:44:58.077838 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.078333 kubelet[3474]: E0909 23:44:58.078297 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.078403 kubelet[3474]: W0909 23:44:58.078347 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.078403 kubelet[3474]: E0909 23:44:58.078373 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.079547 kubelet[3474]: E0909 23:44:58.079491 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.079547 kubelet[3474]: W0909 23:44:58.079529 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.079742 kubelet[3474]: E0909 23:44:58.079561 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.080702 kubelet[3474]: E0909 23:44:58.080652 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.080702 kubelet[3474]: W0909 23:44:58.080689 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.080864 kubelet[3474]: E0909 23:44:58.080721 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.081547 kubelet[3474]: E0909 23:44:58.081501 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.081547 kubelet[3474]: W0909 23:44:58.081537 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.081884 kubelet[3474]: E0909 23:44:58.081567 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.082432 kubelet[3474]: E0909 23:44:58.082212 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.082432 kubelet[3474]: W0909 23:44:58.082244 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.082432 kubelet[3474]: E0909 23:44:58.082283 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.083177 kubelet[3474]: E0909 23:44:58.083134 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.083177 kubelet[3474]: W0909 23:44:58.083170 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.085146 kubelet[3474]: E0909 23:44:58.083201 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.085492 kubelet[3474]: E0909 23:44:58.085449 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.085567 kubelet[3474]: W0909 23:44:58.085487 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.085567 kubelet[3474]: E0909 23:44:58.085523 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.087609 kubelet[3474]: E0909 23:44:58.087244 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.087609 kubelet[3474]: W0909 23:44:58.087284 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.087609 kubelet[3474]: E0909 23:44:58.087313 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.089446 kubelet[3474]: E0909 23:44:58.089393 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.089446 kubelet[3474]: W0909 23:44:58.089432 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.089654 kubelet[3474]: E0909 23:44:58.089477 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.108382 kubelet[3474]: E0909 23:44:58.108345 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.108862 kubelet[3474]: W0909 23:44:58.108576 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.108862 kubelet[3474]: E0909 23:44:58.108615 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.109219 kubelet[3474]: E0909 23:44:58.109195 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.109339 kubelet[3474]: W0909 23:44:58.109317 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.110148 kubelet[3474]: E0909 23:44:58.109433 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.111483 kubelet[3474]: E0909 23:44:58.111432 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.111483 kubelet[3474]: W0909 23:44:58.111471 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.111699 kubelet[3474]: E0909 23:44:58.111503 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.112033 kubelet[3474]: E0909 23:44:58.111997 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.112033 kubelet[3474]: W0909 23:44:58.112028 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.112229 kubelet[3474]: E0909 23:44:58.112053 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.113420 kubelet[3474]: E0909 23:44:58.113375 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.113420 kubelet[3474]: W0909 23:44:58.113412 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.113586 kubelet[3474]: E0909 23:44:58.113443 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.115373 kubelet[3474]: E0909 23:44:58.115321 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.115373 kubelet[3474]: W0909 23:44:58.115362 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.115600 kubelet[3474]: E0909 23:44:58.115394 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.115841 kubelet[3474]: E0909 23:44:58.115807 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.115841 kubelet[3474]: W0909 23:44:58.115835 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.115950 kubelet[3474]: E0909 23:44:58.115858 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.117363 kubelet[3474]: E0909 23:44:58.117317 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.117363 kubelet[3474]: W0909 23:44:58.117357 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.117588 kubelet[3474]: E0909 23:44:58.117388 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.117828 kubelet[3474]: E0909 23:44:58.117794 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.117828 kubelet[3474]: W0909 23:44:58.117821 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.117944 kubelet[3474]: E0909 23:44:58.117843 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.119296 kubelet[3474]: E0909 23:44:58.119219 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.119296 kubelet[3474]: W0909 23:44:58.119270 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.119296 kubelet[3474]: E0909 23:44:58.119301 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.120470 kubelet[3474]: E0909 23:44:58.120379 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.120470 kubelet[3474]: W0909 23:44:58.120416 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.120470 kubelet[3474]: E0909 23:44:58.120447 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.120825 kubelet[3474]: E0909 23:44:58.120771 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.120825 kubelet[3474]: W0909 23:44:58.120789 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.120825 kubelet[3474]: E0909 23:44:58.120809 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.122357 kubelet[3474]: E0909 23:44:58.122308 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.122357 kubelet[3474]: W0909 23:44:58.122346 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.122576 kubelet[3474]: E0909 23:44:58.122378 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.123273 kubelet[3474]: E0909 23:44:58.123224 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.123273 kubelet[3474]: W0909 23:44:58.123260 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.124716 kubelet[3474]: E0909 23:44:58.123286 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.124716 kubelet[3474]: E0909 23:44:58.123646 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.124716 kubelet[3474]: W0909 23:44:58.123664 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.124716 kubelet[3474]: E0909 23:44:58.123684 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.124716 kubelet[3474]: E0909 23:44:58.124586 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.124716 kubelet[3474]: W0909 23:44:58.124612 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.124716 kubelet[3474]: E0909 23:44:58.124640 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.126976 kubelet[3474]: E0909 23:44:58.126926 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.126976 kubelet[3474]: W0909 23:44:58.126964 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.127637 kubelet[3474]: E0909 23:44:58.126998 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:44:58.127746 kubelet[3474]: E0909 23:44:58.127696 3474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:44:58.127746 kubelet[3474]: W0909 23:44:58.127721 3474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:44:58.127898 kubelet[3474]: E0909 23:44:58.127747 3474 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:44:58.581728 containerd[2031]: time="2025-09-09T23:44:58.581651141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:58.583804 containerd[2031]: time="2025-09-09T23:44:58.583525001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 23:44:58.585792 containerd[2031]: time="2025-09-09T23:44:58.585749297Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:58.590226 containerd[2031]: time="2025-09-09T23:44:58.590178869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:58.591501 containerd[2031]: time="2025-09-09T23:44:58.591437021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.230165054s" Sep 9 23:44:58.591626 containerd[2031]: time="2025-09-09T23:44:58.591503129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 23:44:58.601209 containerd[2031]: time="2025-09-09T23:44:58.601093937Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 23:44:58.619573 containerd[2031]: time="2025-09-09T23:44:58.619505921Z" level=info msg="Container 9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:58.644018 containerd[2031]: time="2025-09-09T23:44:58.643942241Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\"" Sep 9 23:44:58.646004 containerd[2031]: time="2025-09-09T23:44:58.645920225Z" level=info msg="StartContainer for \"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\"" Sep 9 23:44:58.649370 containerd[2031]: time="2025-09-09T23:44:58.649291253Z" level=info msg="connecting to shim 9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0" address="unix:///run/containerd/s/37498f60e2d6cb99ab88e265605557a277aff85e17930538f33b472e10b34dd8" protocol=ttrpc version=3 Sep 9 23:44:58.707419 systemd[1]: Started cri-containerd-9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0.scope - libcontainer container 9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0. 
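Two details in the pull records above are easy to misread: "bytes read=4266814" is the network transfer for this pull, while the size printed alongside the repo digest ("5636015") is the image's total content size; and the "connecting to shim" entry for flexvol-driver reuses the same unix socket (/run/containerd/s/37498f60...) as its sandbox 567c4260..., consistent with all containers of a pod being served by one shim. For cross-checking such entries out of band, a small containerd client works; the sketch below assumes the stock socket path and the k8s.io namespace conventionally used for CRI-managed images.

    // Minimal containerd client sketch for cross-checking the ImageCreate /
    // "Pulled image" entries above. Socket path and the "k8s.io" namespace
    // are the conventional CRI defaults, assumed here.
    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            size, _ := img.Size(ctx) // content size as containerd stores it
            fmt.Printf("%-80s %d bytes\n", img.Name(), size)
        }
    }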
Sep 9 23:44:58.769221 kubelet[3474]: E0909 23:44:58.767473 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5" Sep 9 23:44:58.802705 containerd[2031]: time="2025-09-09T23:44:58.802637706Z" level=info msg="StartContainer for \"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\" returns successfully" Sep 9 23:44:58.827479 systemd[1]: cri-containerd-9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0.scope: Deactivated successfully. Sep 9 23:44:58.838885 containerd[2031]: time="2025-09-09T23:44:58.837915342Z" level=info msg="received exit event container_id:\"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\" id:\"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\" pid:4157 exited_at:{seconds:1757461498 nanos:836840058}" Sep 9 23:44:58.838885 containerd[2031]: time="2025-09-09T23:44:58.837929982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\" id:\"9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0\" pid:4157 exited_at:{seconds:1757461498 nanos:836840058}" Sep 9 23:44:58.882586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da6f18387a3f709065f8d97758e7bdfcc28baad9158ebed3581a4e0f26c6db0-rootfs.mount: Deactivated successfully. Sep 9 23:44:59.042885 kubelet[3474]: I0909 23:44:59.042330 3474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:44:59.071208 kubelet[3474]: I0909 23:44:59.071103 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b888b474-8qgdr" podStartSLOduration=3.477050127 podStartE2EDuration="6.071080192s" podCreationTimestamp="2025-09-09 23:44:53 +0000 UTC" firstStartedPulling="2025-09-09 23:44:54.765840446 +0000 UTC m=+30.223621735" lastFinishedPulling="2025-09-09 23:44:57.359870511 +0000 UTC m=+32.817651800" observedRunningTime="2025-09-09 23:44:58.146689671 +0000 UTC m=+33.604470996" watchObservedRunningTime="2025-09-09 23:44:59.071080192 +0000 UTC m=+34.528861505" Sep 9 23:45:00.056165 containerd[2031]: time="2025-09-09T23:45:00.055081408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 23:45:00.767535 kubelet[3474]: E0909 23:45:00.767422 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5" Sep 9 23:45:02.768081 kubelet[3474]: E0909 23:45:02.768026 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5" Sep 9 23:45:03.402552 containerd[2031]: time="2025-09-09T23:45:03.402476421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:03.404825 containerd[2031]: time="2025-09-09T23:45:03.404429241Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 23:45:03.406823 containerd[2031]: time="2025-09-09T23:45:03.406765005Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:03.411566 containerd[2031]: time="2025-09-09T23:45:03.411517209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:03.412767 containerd[2031]: time="2025-09-09T23:45:03.412709805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.356496089s" Sep 9 23:45:03.412874 containerd[2031]: time="2025-09-09T23:45:03.412764153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 23:45:03.422520 containerd[2031]: time="2025-09-09T23:45:03.422450421Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 23:45:03.451213 containerd[2031]: time="2025-09-09T23:45:03.449509173Z" level=info msg="Container 84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:03.469702 containerd[2031]: time="2025-09-09T23:45:03.469618929Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\"" Sep 9 23:45:03.470890 containerd[2031]: time="2025-09-09T23:45:03.470831517Z" level=info msg="StartContainer for \"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\"" Sep 9 23:45:03.475273 containerd[2031]: time="2025-09-09T23:45:03.475199013Z" level=info msg="connecting to shim 84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7" address="unix:///run/containerd/s/37498f60e2d6cb99ab88e265605557a277aff85e17930538f33b472e10b34dd8" protocol=ttrpc version=3 Sep 9 23:45:03.516439 systemd[1]: Started cri-containerd-84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7.scope - libcontainer container 84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7. Sep 9 23:45:03.613285 containerd[2031]: time="2025-09-09T23:45:03.613209730Z" level=info msg="StartContainer for \"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\" returns successfully" Sep 9 23:45:04.551499 containerd[2031]: time="2025-09-09T23:45:04.551404451Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:45:04.557357 systemd[1]: cri-containerd-84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7.scope: Deactivated successfully. 
Sep 9 23:45:04.558862 systemd[1]: cri-containerd-84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7.scope: Consumed 904ms CPU time, 185.7M memory peak, 165.8M written to disk. Sep 9 23:45:04.565584 containerd[2031]: time="2025-09-09T23:45:04.565215455Z" level=info msg="received exit event container_id:\"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\" id:\"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\" pid:4219 exited_at:{seconds:1757461504 nanos:564488051}" Sep 9 23:45:04.565873 containerd[2031]: time="2025-09-09T23:45:04.565813355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\" id:\"84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7\" pid:4219 exited_at:{seconds:1757461504 nanos:564488051}" Sep 9 23:45:04.593156 kubelet[3474]: I0909 23:45:04.592985 3474 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:45:04.622950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7-rootfs.mount: Deactivated successfully. Sep 9 23:45:04.768570 kubelet[3474]: I0909 23:45:04.768423 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63153ad7-dcff-4576-b8ff-089f3921a547-config-volume\") pod \"coredns-674b8bbfcf-q5bjz\" (UID: \"63153ad7-dcff-4576-b8ff-089f3921a547\") " pod="kube-system/coredns-674b8bbfcf-q5bjz" Sep 9 23:45:04.768570 kubelet[3474]: I0909 23:45:04.768527 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvv4\" (UniqueName: \"kubernetes.io/projected/63153ad7-dcff-4576-b8ff-089f3921a547-kube-api-access-8fvv4\") pod \"coredns-674b8bbfcf-q5bjz\" (UID: \"63153ad7-dcff-4576-b8ff-089f3921a547\") " pod="kube-system/coredns-674b8bbfcf-q5bjz" Sep 9 23:45:04.784935 systemd[1]: Created slice kubepods-burstable-pod63153ad7_dcff_4576_b8ff_089f3921a547.slice - libcontainer container kubepods-burstable-pod63153ad7_dcff_4576_b8ff_089f3921a547.slice. Sep 9 23:45:04.834731 systemd[1]: Created slice kubepods-besteffort-pod0b026207_8920_4e84_86a1_65c3e05a13ec.slice - libcontainer container kubepods-besteffort-pod0b026207_8920_4e84_86a1_65c3e05a13ec.slice. Sep 9 23:45:04.854541 systemd[1]: Created slice kubepods-besteffort-pod2b4e4851_12cf_4217_b8be_222300d3bf0b.slice - libcontainer container kubepods-besteffort-pod2b4e4851_12cf_4217_b8be_222300d3bf0b.slice. 
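The shim events and the journal cross-reference each other cleanly: TaskExit carries exited_at as a {seconds, nanos} pair, and converting it lands on the journal timestamps of the surrounding entries (1757461498 for the flexvol-driver exit logged at 23:44:58, 1757461504 for the install-cni exit above). The same kind of cross-check works for the pod_startup_latency_tracker entry earlier: podStartSLOduration=3.477050127 is exactly podStartE2EDuration (6.071080192 s) minus the image-pull window (23:44:57.359870511 - 23:44:54.765840446 = 2.594030065 s). A one-liner for the exited_at conversion:

    // The TaskExit events above carry exited_at as {seconds, nanos};
    // converting one shows it lining up with the journal timestamps
    // (values copied from the install-cni exit event).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        exitedAt := time.Unix(1757461504, 564488051).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // 2025-09-09T23:45:04.564488051Z
    }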
Sep 9 23:45:04.869054 kubelet[3474]: I0909 23:45:04.868928 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt57\" (UniqueName: \"kubernetes.io/projected/0b026207-8920-4e84-86a1-65c3e05a13ec-kube-api-access-gvt57\") pod \"whisker-64c7587ffc-vrhlh\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " pod="calico-system/whisker-64c7587ffc-vrhlh" Sep 9 23:45:04.869385 kubelet[3474]: I0909 23:45:04.869283 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-ca-bundle\") pod \"whisker-64c7587ffc-vrhlh\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " pod="calico-system/whisker-64c7587ffc-vrhlh" Sep 9 23:45:04.869558 kubelet[3474]: I0909 23:45:04.869332 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b4e4851-12cf-4217-b8be-222300d3bf0b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5xd2t\" (UID: \"2b4e4851-12cf-4217-b8be-222300d3bf0b\") " pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:04.869558 kubelet[3474]: I0909 23:45:04.869524 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-backend-key-pair\") pod \"whisker-64c7587ffc-vrhlh\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " pod="calico-system/whisker-64c7587ffc-vrhlh" Sep 9 23:45:04.869960 kubelet[3474]: I0909 23:45:04.869751 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2b4e4851-12cf-4217-b8be-222300d3bf0b-goldmane-key-pair\") pod \"goldmane-54d579b49d-5xd2t\" (UID: \"2b4e4851-12cf-4217-b8be-222300d3bf0b\") " pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:04.870232 kubelet[3474]: I0909 23:45:04.870146 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4e4851-12cf-4217-b8be-222300d3bf0b-config\") pod \"goldmane-54d579b49d-5xd2t\" (UID: \"2b4e4851-12cf-4217-b8be-222300d3bf0b\") " pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:04.870232 kubelet[3474]: I0909 23:45:04.870200 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khbl\" (UniqueName: \"kubernetes.io/projected/2b4e4851-12cf-4217-b8be-222300d3bf0b-kube-api-access-5khbl\") pod \"goldmane-54d579b49d-5xd2t\" (UID: \"2b4e4851-12cf-4217-b8be-222300d3bf0b\") " pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:04.877808 systemd[1]: Created slice kubepods-besteffort-podf02a0402_c726_48be_86a1_a888ea61e0f5.slice - libcontainer container kubepods-besteffort-podf02a0402_c726_48be_86a1_a888ea61e0f5.slice. Sep 9 23:45:04.904042 containerd[2031]: time="2025-09-09T23:45:04.903943404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs7p,Uid:f02a0402-c726-48be-86a1-a888ea61e0f5,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:04.945045 systemd[1]: Created slice kubepods-besteffort-podd09b54b6_c901_408b_b515_b6aeff15c64c.slice - libcontainer container kubepods-besteffort-podd09b54b6_c901_408b_b515_b6aeff15c64c.slice. 
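This burst of VerifyControllerAttachedVolume entries follows directly from "Fast updating node status as it just became ready" above: with the node Ready, the whisker, goldmane and csi-node-driver pods are admitted and their ConfigMap, Secret and projected volumes prepared, even though their sandboxes cannot get networking until Calico's node agent is up. To watch those pods converge from outside the node, a client-go listing such as the hedged sketch below would do; the kubeconfig path is a placeholder assumption, not a path taken from this host.

    // Hedged client-go sketch for watching the calico-system pods whose
    // volumes the reconciler above is preparing.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := clientset.CoreV1().Pods("calico-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-40s %s\n", p.Name, p.Status.Phase)
        }
    }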
Sep 9 23:45:04.971371 kubelet[3474]: I0909 23:45:04.971306 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwck\" (UniqueName: \"kubernetes.io/projected/d09b54b6-c901-408b-b515-b6aeff15c64c-kube-api-access-hmwck\") pod \"calico-kube-controllers-6c9d778fc9-7r6b7\" (UID: \"d09b54b6-c901-408b-b515-b6aeff15c64c\") " pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" Sep 9 23:45:04.992345 kubelet[3474]: I0909 23:45:04.971411 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d09b54b6-c901-408b-b515-b6aeff15c64c-tigera-ca-bundle\") pod \"calico-kube-controllers-6c9d778fc9-7r6b7\" (UID: \"d09b54b6-c901-408b-b515-b6aeff15c64c\") " pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" Sep 9 23:45:04.992345 kubelet[3474]: I0909 23:45:04.971672 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23f2cb82-35f9-49b6-b5c8-108004fea67a-calico-apiserver-certs\") pod \"calico-apiserver-7fcccfb84d-vl5gd\" (UID: \"23f2cb82-35f9-49b6-b5c8-108004fea67a\") " pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" Sep 9 23:45:04.992345 kubelet[3474]: I0909 23:45:04.972250 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf57r\" (UniqueName: \"kubernetes.io/projected/23f2cb82-35f9-49b6-b5c8-108004fea67a-kube-api-access-kf57r\") pod \"calico-apiserver-7fcccfb84d-vl5gd\" (UID: \"23f2cb82-35f9-49b6-b5c8-108004fea67a\") " pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" Sep 9 23:45:05.060522 systemd[1]: Created slice kubepods-besteffort-podab0c5703_81da_4d19_b1df_7707bee03dbd.slice - libcontainer container kubepods-besteffort-podab0c5703_81da_4d19_b1df_7707bee03dbd.slice. Sep 9 23:45:05.074819 kubelet[3474]: I0909 23:45:05.074764 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5703-81da-4d19-b1df-7707bee03dbd-calico-apiserver-certs\") pod \"calico-apiserver-7fcccfb84d-jkhpk\" (UID: \"ab0c5703-81da-4d19-b1df-7707bee03dbd\") " pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" Sep 9 23:45:05.075107 kubelet[3474]: I0909 23:45:05.075075 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4zr\" (UniqueName: \"kubernetes.io/projected/ab0c5703-81da-4d19-b1df-7707bee03dbd-kube-api-access-qd4zr\") pod \"calico-apiserver-7fcccfb84d-jkhpk\" (UID: \"ab0c5703-81da-4d19-b1df-7707bee03dbd\") " pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" Sep 9 23:45:05.110174 containerd[2031]: time="2025-09-09T23:45:05.108278145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q5bjz,Uid:63153ad7-dcff-4576-b8ff-089f3921a547,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:05.119663 systemd[1]: Created slice kubepods-besteffort-pod23f2cb82_35f9_49b6_b5c8_108004fea67a.slice - libcontainer container kubepods-besteffort-pod23f2cb82_35f9_49b6_b5c8_108004fea67a.slice. Sep 9 23:45:05.164257 systemd[1]: Created slice kubepods-burstable-pod385d8235_6e7a_4851_9a04_a461b1d648de.slice - libcontainer container kubepods-burstable-pod385d8235_6e7a_4851_9a04_a461b1d648de.slice. 
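The "Created slice" entries show the kubelet's systemd cgroup driver at work: each pod gets a transient slice named from its QoS class and UID, with the UID's dashes rewritten as underscores. Compare the UID 23f2cb82-35f9-49b6-b5c8-108004fea67a in the calico-apiserver volume entries with kubepods-besteffort-pod23f2cb82_35f9_49b6_b5c8_108004fea67a.slice just above. A toy reconstruction of the leaf unit name (illustrative only; the real logic lives in the kubelet's cgroup manager):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName rebuilds the unit name pattern visible in the journal:
    // dashes in the pod UID become underscores inside the slice name.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "23f2cb82-35f9-49b6-b5c8-108004fea67a"))
        // kubepods-besteffort-pod23f2cb82_35f9_49b6_b5c8_108004fea67a.slice
    }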
Sep 9 23:45:05.168344 containerd[2031]: time="2025-09-09T23:45:05.168090118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c7587ffc-vrhlh,Uid:0b026207-8920-4e84-86a1-65c3e05a13ec,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:05.169003 containerd[2031]: time="2025-09-09T23:45:05.168292114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-vl5gd,Uid:23f2cb82-35f9-49b6-b5c8-108004fea67a,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:45:05.174678 containerd[2031]: time="2025-09-09T23:45:05.174104290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5xd2t,Uid:2b4e4851-12cf-4217-b8be-222300d3bf0b,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:05.179098 kubelet[3474]: I0909 23:45:05.177783 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/385d8235-6e7a-4851-9a04-a461b1d648de-config-volume\") pod \"coredns-674b8bbfcf-h2h7d\" (UID: \"385d8235-6e7a-4851-9a04-a461b1d648de\") " pod="kube-system/coredns-674b8bbfcf-h2h7d" Sep 9 23:45:05.179098 kubelet[3474]: I0909 23:45:05.177885 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlljq\" (UniqueName: \"kubernetes.io/projected/385d8235-6e7a-4851-9a04-a461b1d648de-kube-api-access-nlljq\") pod \"coredns-674b8bbfcf-h2h7d\" (UID: \"385d8235-6e7a-4851-9a04-a461b1d648de\") " pod="kube-system/coredns-674b8bbfcf-h2h7d" Sep 9 23:45:05.239593 containerd[2031]: time="2025-09-09T23:45:05.239363134Z" level=error msg="collecting metrics for 84af876ade60cb4bfd7835aea63b07b671aab1686e73c07c494e00c0f3e46ce7" error="ttrpc: closed" Sep 9 23:45:05.279144 containerd[2031]: time="2025-09-09T23:45:05.278206414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9d778fc9-7r6b7,Uid:d09b54b6-c901-408b-b515-b6aeff15c64c,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:05.394183 containerd[2031]: time="2025-09-09T23:45:05.393911579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-jkhpk,Uid:ab0c5703-81da-4d19-b1df-7707bee03dbd,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:45:05.467237 containerd[2031]: time="2025-09-09T23:45:05.466972151Z" level=error msg="Failed to destroy network for sandbox \"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.475206 containerd[2031]: time="2025-09-09T23:45:05.474726779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs7p,Uid:f02a0402-c726-48be-86a1-a888ea61e0f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.475605 kubelet[3474]: E0909 23:45:05.475159 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.475605 kubelet[3474]: E0909 23:45:05.475543 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cs7p" Sep 9 23:45:05.475961 kubelet[3474]: E0909 23:45:05.475582 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cs7p" Sep 9 23:45:05.476280 kubelet[3474]: E0909 23:45:05.476162 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7cs7p_calico-system(f02a0402-c726-48be-86a1-a888ea61e0f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7cs7p_calico-system(f02a0402-c726-48be-86a1-a888ea61e0f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37e727275eb873260fc1fc60e5946f917344ac8f59ca87db43cac196e7285ef8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cs7p" podUID="f02a0402-c726-48be-86a1-a888ea61e0f5" Sep 9 23:45:05.495875 containerd[2031]: time="2025-09-09T23:45:05.495516875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2h7d,Uid:385d8235-6e7a-4851-9a04-a461b1d648de,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:05.658100 systemd[1]: run-netns-cni\x2d5b970f94\x2dd09f\x2ddfdc\x2df84f\x2d95480d53c27c.mount: Deactivated successfully. Sep 9 23:45:05.686756 containerd[2031]: time="2025-09-09T23:45:05.686619288Z" level=error msg="Failed to destroy network for sandbox \"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.694658 systemd[1]: run-netns-cni\x2d48f38bd9\x2d5d6a\x2dd7ae\x2d07c2\x2dffa70bc49e6a.mount: Deactivated successfully. 
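The run-netns-cni\x2d… units being deactivated above are systemd mount units for the per-sandbox network namespaces that containerd bind-mounts under /run/netns: systemd cannot use "-" literally inside a unit-name component, so every "-" in the namespace name is escaped as \x2d while "/" separators become "-". A minimal Go sketch of that escaping rule, simplified from what systemd-escape actually does and using the namespace name reconstructed from the unit above:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeMountUnit mimics the subset of systemd-escape behaviour visible
    // in the log: "/" separators become "-", and every literal "-" inside a
    // path component is encoded as the escape sequence \x2d. The real
    // escaper handles more characters; this covers only what the log shows.
    func escapeMountUnit(path string) string {
        parts := strings.Split(strings.Trim(path, "/"), "/")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
        }
        return strings.Join(parts, "-") + ".mount"
    }

    func main() {
        // Namespace path reconstructed from the unit name in the log above.
        fmt.Println(escapeMountUnit("/run/netns/cni-5b970f94-d09f-dfdc-f84f-95480d53c27c"))
        // Output: run-netns-cni\x2d5b970f94\x2dd09f\x2ddfdc\x2df84f\x2d95480d53c27c.mount
    }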
Sep 9 23:45:05.696978 containerd[2031]: time="2025-09-09T23:45:05.696810468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q5bjz,Uid:63153ad7-dcff-4576-b8ff-089f3921a547,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.697842 containerd[2031]: time="2025-09-09T23:45:05.697482492Z" level=error msg="Failed to destroy network for sandbox \"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.699445 kubelet[3474]: E0909 23:45:05.697680 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.699445 kubelet[3474]: E0909 23:45:05.697755 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q5bjz" Sep 9 23:45:05.699445 kubelet[3474]: E0909 23:45:05.697790 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q5bjz" Sep 9 23:45:05.700026 kubelet[3474]: E0909 23:45:05.697877 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q5bjz_kube-system(63153ad7-dcff-4576-b8ff-089f3921a547)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q5bjz_kube-system(63153ad7-dcff-4576-b8ff-089f3921a547)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"203929cce4e400a490b2093215543396b63b639e1a23a6c6a26f0e2d4b3a813a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q5bjz" podUID="63153ad7-dcff-4576-b8ff-089f3921a547" Sep 9 23:45:05.706448 containerd[2031]: time="2025-09-09T23:45:05.706264296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-vl5gd,Uid:23f2cb82-35f9-49b6-b5c8-108004fea67a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.707034 kubelet[3474]: E0909 23:45:05.706866 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.707034 kubelet[3474]: E0909 23:45:05.707000 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" Sep 9 23:45:05.707034 kubelet[3474]: E0909 23:45:05.707035 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" Sep 9 23:45:05.708826 kubelet[3474]: E0909 23:45:05.708755 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcccfb84d-vl5gd_calico-apiserver(23f2cb82-35f9-49b6-b5c8-108004fea67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcccfb84d-vl5gd_calico-apiserver(23f2cb82-35f9-49b6-b5c8-108004fea67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6af99c3e13c516b2892d4837dcc980339e5a78900ed748ad711d70bcfbd79a95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" podUID="23f2cb82-35f9-49b6-b5c8-108004fea67a" Sep 9 23:45:05.709038 systemd[1]: run-netns-cni\x2d7950ce9e\x2dbf59\x2d2d5b\x2d8764\x2d963e62440aeb.mount: Deactivated successfully. 
Sep 9 23:45:05.726169 containerd[2031]: time="2025-09-09T23:45:05.726076117Z" level=error msg="Failed to destroy network for sandbox \"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.730255 containerd[2031]: time="2025-09-09T23:45:05.729621361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9d778fc9-7r6b7,Uid:d09b54b6-c901-408b-b515-b6aeff15c64c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.730425 kubelet[3474]: E0909 23:45:05.730065 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.731348 systemd[1]: run-netns-cni\x2d0df40fc2\x2d00b7\x2d5415\x2d75d9\x2d8bb4db52ccb5.mount: Deactivated successfully. Sep 9 23:45:05.731587 kubelet[3474]: E0909 23:45:05.731492 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" Sep 9 23:45:05.731587 kubelet[3474]: E0909 23:45:05.731540 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" Sep 9 23:45:05.731980 kubelet[3474]: E0909 23:45:05.731637 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c9d778fc9-7r6b7_calico-system(d09b54b6-c901-408b-b515-b6aeff15c64c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c9d778fc9-7r6b7_calico-system(d09b54b6-c901-408b-b515-b6aeff15c64c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f6e913cef053be00da7d707f6bebcfd48c3254032e832c442ae37f9a89c009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" podUID="d09b54b6-c901-408b-b515-b6aeff15c64c" Sep 9 23:45:05.742877 containerd[2031]: time="2025-09-09T23:45:05.742789405Z" level=error msg="Failed to destroy network for 
sandbox \"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.749635 systemd[1]: run-netns-cni\x2d4a3298c9\x2d7b8d\x2d1375\x2d325f\x2d06c007d1b6a4.mount: Deactivated successfully. Sep 9 23:45:05.752095 containerd[2031]: time="2025-09-09T23:45:05.749917705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5xd2t,Uid:2b4e4851-12cf-4217-b8be-222300d3bf0b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.752312 kubelet[3474]: E0909 23:45:05.750647 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.753545 kubelet[3474]: E0909 23:45:05.752508 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:05.753545 kubelet[3474]: E0909 23:45:05.752734 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5xd2t" Sep 9 23:45:05.754602 kubelet[3474]: E0909 23:45:05.753806 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5xd2t_calico-system(2b4e4851-12cf-4217-b8be-222300d3bf0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5xd2t_calico-system(2b4e4851-12cf-4217-b8be-222300d3bf0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28d47eb620d41168cfef286a3eacacd87994a3635c6f0bf5b900d1e8aaf92887\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5xd2t" podUID="2b4e4851-12cf-4217-b8be-222300d3bf0b" Sep 9 23:45:05.774567 containerd[2031]: time="2025-09-09T23:45:05.774370801Z" level=error msg="Failed to destroy network for sandbox \"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 9 23:45:05.779182 containerd[2031]: time="2025-09-09T23:45:05.778024333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c7587ffc-vrhlh,Uid:0b026207-8920-4e84-86a1-65c3e05a13ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.780400 kubelet[3474]: E0909 23:45:05.779670 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.780400 kubelet[3474]: E0909 23:45:05.779768 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64c7587ffc-vrhlh" Sep 9 23:45:05.780400 kubelet[3474]: E0909 23:45:05.779800 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64c7587ffc-vrhlh" Sep 9 23:45:05.780799 kubelet[3474]: E0909 23:45:05.779886 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64c7587ffc-vrhlh_calico-system(0b026207-8920-4e84-86a1-65c3e05a13ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64c7587ffc-vrhlh_calico-system(0b026207-8920-4e84-86a1-65c3e05a13ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af38642ee02234e9091c75ff6efce1fae955e5ba67aac7940624784954da02e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64c7587ffc-vrhlh" podUID="0b026207-8920-4e84-86a1-65c3e05a13ec" Sep 9 23:45:05.793711 containerd[2031]: time="2025-09-09T23:45:05.793638913Z" level=error msg="Failed to destroy network for sandbox \"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.796136 containerd[2031]: time="2025-09-09T23:45:05.796033225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-jkhpk,Uid:ab0c5703-81da-4d19-b1df-7707bee03dbd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.796685 kubelet[3474]: E0909 23:45:05.796611 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.796781 kubelet[3474]: E0909 23:45:05.796695 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" Sep 9 23:45:05.796781 kubelet[3474]: E0909 23:45:05.796732 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" Sep 9 23:45:05.796902 kubelet[3474]: E0909 23:45:05.796812 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcccfb84d-jkhpk_calico-apiserver(ab0c5703-81da-4d19-b1df-7707bee03dbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcccfb84d-jkhpk_calico-apiserver(ab0c5703-81da-4d19-b1df-7707bee03dbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"757d8731c8c7c85adc7fcfc40f6ce0ea7c06e4f6db46f573f61320fce5f2c38d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" podUID="ab0c5703-81da-4d19-b1df-7707bee03dbd" Sep 9 23:45:05.820077 containerd[2031]: time="2025-09-09T23:45:05.819915253Z" level=error msg="Failed to destroy network for sandbox \"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.821588 containerd[2031]: time="2025-09-09T23:45:05.821434093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2h7d,Uid:385d8235-6e7a-4851-9a04-a461b1d648de,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.822084 
kubelet[3474]: E0909 23:45:05.822020 3474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:45:05.822254 kubelet[3474]: E0909 23:45:05.822227 3474 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2h7d" Sep 9 23:45:05.822353 kubelet[3474]: E0909 23:45:05.822269 3474 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2h7d" Sep 9 23:45:05.822439 kubelet[3474]: E0909 23:45:05.822352 3474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h2h7d_kube-system(385d8235-6e7a-4851-9a04-a461b1d648de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h2h7d_kube-system(385d8235-6e7a-4851-9a04-a461b1d648de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8ce7fc7a87fe2332e0e766e66455e9aa66406cde3c91dfd2b3f0eb451bc5684\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h2h7d" podUID="385d8235-6e7a-4851-9a04-a461b1d648de" Sep 9 23:45:06.104313 containerd[2031]: time="2025-09-09T23:45:06.104038474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 23:45:06.618937 systemd[1]: run-netns-cni\x2da5075be9\x2de69b\x2d4005\x2df621\x2d0e8f39578514.mount: Deactivated successfully. Sep 9 23:45:06.619104 systemd[1]: run-netns-cni\x2dd8fccfb6\x2d42b7\x2de2ef\x2d9d09\x2dc094c9897f0d.mount: Deactivated successfully. Sep 9 23:45:06.619256 systemd[1]: run-netns-cni\x2d903983b7\x2d8358\x2d8d39\x2dfeb4\x2d001d2bd79251.mount: Deactivated successfully. Sep 9 23:45:14.289704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718957878.mount: Deactivated successfully. 
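The burst of run-netns-cni-*.mount deactivations is the cleanup half of the failed sandboxes: each aborted RunPodSandbox leaves a namespace bind-mounted under /run/netns until containerd unmounts it, and the kubelet simply retries sandbox creation on a later pod sync. The PullImage of ghcr.io/flatcar/calico/node:v3.30.3 that starts at 23:45:06 is what eventually unblocks them. Leftover namespaces can be listed with nothing but the standard library (a sketch assuming the conventional /run/netns layout):

    package main

    import (
        "fmt"
        "os"
    )

    // Lists whatever sandbox network namespaces are still bind-mounted.
    // Each name corresponds to one run-netns-*.mount unit in the log; an
    // empty listing means the cleanup above has finished.
    func main() {
        entries, err := os.ReadDir("/run/netns")
        if err != nil {
            fmt.Fprintln(os.Stderr, "read /run/netns:", err)
            os.Exit(1)
        }
        for _, e := range entries {
            fmt.Println(e.Name())
        }
    }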
Sep 9 23:45:14.348650 containerd[2031]: time="2025-09-09T23:45:14.348581575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:14.350017 containerd[2031]: time="2025-09-09T23:45:14.349944955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 23:45:14.351276 containerd[2031]: time="2025-09-09T23:45:14.351174631Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:14.354652 containerd[2031]: time="2025-09-09T23:45:14.354572239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:14.356431 containerd[2031]: time="2025-09-09T23:45:14.355645963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 8.251379573s" Sep 9 23:45:14.356431 containerd[2031]: time="2025-09-09T23:45:14.355703071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 23:45:14.389369 containerd[2031]: time="2025-09-09T23:45:14.389293568Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 23:45:14.409667 containerd[2031]: time="2025-09-09T23:45:14.406375208Z" level=info msg="Container 52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:14.431610 containerd[2031]: time="2025-09-09T23:45:14.431559140Z" level=info msg="CreateContainer within sandbox \"567c4260ea6adf3a1c4f8496516aa6171e0ac78a41a79ac845ff699ac545c5f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\"" Sep 9 23:45:14.433390 containerd[2031]: time="2025-09-09T23:45:14.433346636Z" level=info msg="StartContainer for \"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\"" Sep 9 23:45:14.438870 containerd[2031]: time="2025-09-09T23:45:14.438755408Z" level=info msg="connecting to shim 52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3" address="unix:///run/containerd/s/37498f60e2d6cb99ab88e265605557a277aff85e17930538f33b472e10b34dd8" protocol=ttrpc version=3 Sep 9 23:45:14.477441 systemd[1]: Started cri-containerd-52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3.scope - libcontainer container 52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3. Sep 9 23:45:14.567467 containerd[2031]: time="2025-09-09T23:45:14.567284960Z" level=info msg="StartContainer for \"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" returns successfully" Sep 9 23:45:14.831468 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 23:45:14.831627 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. 
All Rights Reserved. Sep 9 23:45:15.172258 kubelet[3474]: I0909 23:45:15.172114 3474 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-ca-bundle\") pod \"0b026207-8920-4e84-86a1-65c3e05a13ec\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " Sep 9 23:45:15.173805 kubelet[3474]: I0909 23:45:15.172935 3474 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvt57\" (UniqueName: \"kubernetes.io/projected/0b026207-8920-4e84-86a1-65c3e05a13ec-kube-api-access-gvt57\") pod \"0b026207-8920-4e84-86a1-65c3e05a13ec\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " Sep 9 23:45:15.173805 kubelet[3474]: I0909 23:45:15.172995 3474 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-backend-key-pair\") pod \"0b026207-8920-4e84-86a1-65c3e05a13ec\" (UID: \"0b026207-8920-4e84-86a1-65c3e05a13ec\") " Sep 9 23:45:15.173805 kubelet[3474]: I0909 23:45:15.173025 3474 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0b026207-8920-4e84-86a1-65c3e05a13ec" (UID: "0b026207-8920-4e84-86a1-65c3e05a13ec"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:45:15.174495 kubelet[3474]: I0909 23:45:15.174373 3474 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-ca-bundle\") on node \"ip-172-31-26-206\" DevicePath \"\"" Sep 9 23:45:15.187438 kubelet[3474]: I0909 23:45:15.187327 3474 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0b026207-8920-4e84-86a1-65c3e05a13ec" (UID: "0b026207-8920-4e84-86a1-65c3e05a13ec"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:45:15.192380 kubelet[3474]: I0909 23:45:15.192284 3474 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b026207-8920-4e84-86a1-65c3e05a13ec-kube-api-access-gvt57" (OuterVolumeSpecName: "kube-api-access-gvt57") pod "0b026207-8920-4e84-86a1-65c3e05a13ec" (UID: "0b026207-8920-4e84-86a1-65c3e05a13ec"). InnerVolumeSpecName "kube-api-access-gvt57". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:45:15.205640 kubelet[3474]: I0909 23:45:15.204900 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-795hw" podStartSLOduration=1.938141861 podStartE2EDuration="21.204868916s" podCreationTimestamp="2025-09-09 23:44:54 +0000 UTC" firstStartedPulling="2025-09-09 23:44:55.0900138 +0000 UTC m=+30.547795089" lastFinishedPulling="2025-09-09 23:45:14.356740855 +0000 UTC m=+49.814522144" observedRunningTime="2025-09-09 23:45:15.201708452 +0000 UTC m=+50.659489777" watchObservedRunningTime="2025-09-09 23:45:15.204868916 +0000 UTC m=+50.662650241" Sep 9 23:45:15.278040 kubelet[3474]: I0909 23:45:15.277642 3474 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvt57\" (UniqueName: \"kubernetes.io/projected/0b026207-8920-4e84-86a1-65c3e05a13ec-kube-api-access-gvt57\") on node \"ip-172-31-26-206\" DevicePath \"\"" Sep 9 23:45:15.279230 kubelet[3474]: I0909 23:45:15.278410 3474 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b026207-8920-4e84-86a1-65c3e05a13ec-whisker-backend-key-pair\") on node \"ip-172-31-26-206\" DevicePath \"\"" Sep 9 23:45:15.293040 systemd[1]: var-lib-kubelet-pods-0b026207\x2d8920\x2d4e84\x2d86a1\x2d65c3e05a13ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvt57.mount: Deactivated successfully. Sep 9 23:45:15.294962 systemd[1]: var-lib-kubelet-pods-0b026207\x2d8920\x2d4e84\x2d86a1\x2d65c3e05a13ec-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 23:45:15.408170 kubelet[3474]: I0909 23:45:15.407520 3474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:45:15.454961 systemd[1]: Removed slice kubepods-besteffort-pod0b026207_8920_4e84_86a1_65c3e05a13ec.slice - libcontainer container kubepods-besteffort-pod0b026207_8920_4e84_86a1_65c3e05a13ec.slice. Sep 9 23:45:15.611436 systemd[1]: Created slice kubepods-besteffort-podc4b04bee_45cd_4e94_969d_d0060302d591.slice - libcontainer container kubepods-besteffort-podc4b04bee_45cd_4e94_969d_d0060302d591.slice. 
Sep 9 23:45:15.680733 kubelet[3474]: I0909 23:45:15.680659 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96gf\" (UniqueName: \"kubernetes.io/projected/c4b04bee-45cd-4e94-969d-d0060302d591-kube-api-access-p96gf\") pod \"whisker-65ccd9bff6-zsr6d\" (UID: \"c4b04bee-45cd-4e94-969d-d0060302d591\") " pod="calico-system/whisker-65ccd9bff6-zsr6d" Sep 9 23:45:15.680922 kubelet[3474]: I0909 23:45:15.680775 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b04bee-45cd-4e94-969d-d0060302d591-whisker-backend-key-pair\") pod \"whisker-65ccd9bff6-zsr6d\" (UID: \"c4b04bee-45cd-4e94-969d-d0060302d591\") " pod="calico-system/whisker-65ccd9bff6-zsr6d" Sep 9 23:45:15.680922 kubelet[3474]: I0909 23:45:15.680822 3474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b04bee-45cd-4e94-969d-d0060302d591-whisker-ca-bundle\") pod \"whisker-65ccd9bff6-zsr6d\" (UID: \"c4b04bee-45cd-4e94-969d-d0060302d591\") " pod="calico-system/whisker-65ccd9bff6-zsr6d" Sep 9 23:45:15.921536 containerd[2031]: time="2025-09-09T23:45:15.921341495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ccd9bff6-zsr6d,Uid:c4b04bee-45cd-4e94-969d-d0060302d591,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:15.999736 containerd[2031]: time="2025-09-09T23:45:15.999664212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" id:\"53556682093f4d5252979cab68914c82265e88110cb55c0ba361da4647015d38\" pid:4539 exit_status:1 exited_at:{seconds:1757461515 nanos:997505436}" Sep 9 23:45:16.317545 (udev-worker)[4512]: Network interface NamePolicy= disabled on kernel command line. 
Sep 9 23:45:16.318617 systemd-networkd[1882]: calic05b132f456: Link UP Sep 9 23:45:16.319012 systemd-networkd[1882]: calic05b132f456: Gained carrier Sep 9 23:45:16.360040 containerd[2031]: 2025-09-09 23:45:16.011 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:16.360040 containerd[2031]: 2025-09-09 23:45:16.099 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0 whisker-65ccd9bff6- calico-system c4b04bee-45cd-4e94-969d-d0060302d591 916 0 2025-09-09 23:45:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65ccd9bff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-206 whisker-65ccd9bff6-zsr6d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic05b132f456 [] [] }} ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-" Sep 9 23:45:16.360040 containerd[2031]: 2025-09-09 23:45:16.099 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.360040 containerd[2031]: 2025-09-09 23:45:16.211 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" HandleID="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Workload="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.211 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" HandleID="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Workload="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-206", "pod":"whisker-65ccd9bff6-zsr6d", "timestamp":"2025-09-09 23:45:16.211698945 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.212 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.212 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.212 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.232 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" host="ip-172-31-26-206" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.246 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.254 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.257 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.262 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:16.360526 containerd[2031]: 2025-09-09 23:45:16.262 [INFO][4578] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" host="ip-172-31-26-206" Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.265 [INFO][4578] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0 Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.275 [INFO][4578] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" host="ip-172-31-26-206" Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.284 [INFO][4578] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.1/26] block=192.168.8.0/26 handle="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" host="ip-172-31-26-206" Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.284 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.1/26] handle="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" host="ip-172-31-26-206" Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.284 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
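This IPAM trace shows Calico's block-affinity scheme end to end: the node holds an affinity for the block 192.168.8.0/26 (64 addresses, 192.168.8.0 through 192.168.8.63), allocation runs under a host-wide lock, and the first assignable address, 192.168.8.1, goes to the whisker pod. A short standard-library illustration of what that /26 block contains, using the prefix from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block this node holds an affinity for, from the log above.
        block := netip.MustParsePrefix("192.168.8.0/26")

        // A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses.
        fmt.Printf("%s holds %d addresses\n", block, 1<<(32-block.Bits()))

        // Walk the first assignable addresses; the log shows .1 going to
        // the whisker pod here and .2 to csi-node-driver-7cs7p further down.
        addr := block.Addr().Next() // skip the network address itself
        for i := 0; i < 3; i++ {
            fmt.Println(addr)
            addr = addr.Next()
        }
    }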
Sep 9 23:45:16.361019 containerd[2031]: 2025-09-09 23:45:16.284 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.1/26] IPv6=[] ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" HandleID="k8s-pod-network.4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Workload="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.361383 containerd[2031]: 2025-09-09 23:45:16.298 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0", GenerateName:"whisker-65ccd9bff6-", Namespace:"calico-system", SelfLink:"", UID:"c4b04bee-45cd-4e94-969d-d0060302d591", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 45, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ccd9bff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"whisker-65ccd9bff6-zsr6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic05b132f456", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:16.361383 containerd[2031]: 2025-09-09 23:45:16.298 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.1/32] ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.361660 containerd[2031]: 2025-09-09 23:45:16.298 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic05b132f456 ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.361660 containerd[2031]: 2025-09-09 23:45:16.321 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.361829 containerd[2031]: 2025-09-09 23:45:16.322 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" 
WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0", GenerateName:"whisker-65ccd9bff6-", Namespace:"calico-system", SelfLink:"", UID:"c4b04bee-45cd-4e94-969d-d0060302d591", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ccd9bff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0", Pod:"whisker-65ccd9bff6-zsr6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic05b132f456", MAC:"8e:05:ac:42:cb:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:16.361953 containerd[2031]: 2025-09-09 23:45:16.351 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" Namespace="calico-system" Pod="whisker-65ccd9bff6-zsr6d" WorkloadEndpoint="ip--172--31--26--206-k8s-whisker--65ccd9bff6--zsr6d-eth0" Sep 9 23:45:16.421712 containerd[2031]: time="2025-09-09T23:45:16.421622518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" id:\"42d2b296b1f290dbb140a3b878679244787b58545a251527b551ed4dfc0f5b22\" pid:4595 exit_status:1 exited_at:{seconds:1757461516 nanos:418337266}" Sep 9 23:45:16.425149 containerd[2031]: time="2025-09-09T23:45:16.424969006Z" level=info msg="connecting to shim 4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0" address="unix:///run/containerd/s/fafa01ad78b814f1d6be7bab528861ac77d21f0e4550df2eee5b7e92709d9417" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:16.470448 systemd[1]: Started cri-containerd-4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0.scope - libcontainer container 4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0. 
Sep 9 23:45:16.560470 containerd[2031]: time="2025-09-09T23:45:16.560379562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ccd9bff6-zsr6d,Uid:c4b04bee-45cd-4e94-969d-d0060302d591,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0\"" Sep 9 23:45:16.565334 containerd[2031]: time="2025-09-09T23:45:16.565273990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 23:45:16.769765 containerd[2031]: time="2025-09-09T23:45:16.769676759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs7p,Uid:f02a0402-c726-48be-86a1-a888ea61e0f5,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:16.770765 containerd[2031]: time="2025-09-09T23:45:16.770682599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5xd2t,Uid:2b4e4851-12cf-4217-b8be-222300d3bf0b,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:16.777953 kubelet[3474]: I0909 23:45:16.777592 3474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b026207-8920-4e84-86a1-65c3e05a13ec" path="/var/lib/kubelet/pods/0b026207-8920-4e84-86a1-65c3e05a13ec/volumes" Sep 9 23:45:17.041854 systemd-networkd[1882]: califfe4564dade: Link UP Sep 9 23:45:17.046702 systemd-networkd[1882]: califfe4564dade: Gained carrier Sep 9 23:45:17.095445 containerd[2031]: 2025-09-09 23:45:16.844 [INFO][4660] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:17.095445 containerd[2031]: 2025-09-09 23:45:16.873 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0 csi-node-driver- calico-system f02a0402-c726-48be-86a1-a888ea61e0f5 691 0 2025-09-09 23:44:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-206 csi-node-driver-7cs7p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califfe4564dade [] [] }} ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-" Sep 9 23:45:17.095445 containerd[2031]: 2025-09-09 23:45:16.873 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.095445 containerd[2031]: 2025-09-09 23:45:16.956 [INFO][4683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" HandleID="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Workload="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" HandleID="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" 
Workload="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037a100), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-206", "pod":"csi-node-driver-7cs7p", "timestamp":"2025-09-09 23:45:16.956687172 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.973 [INFO][4683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" host="ip-172-31-26-206" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.981 [INFO][4683] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.988 [INFO][4683] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.992 [INFO][4683] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.996 [INFO][4683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.097044 containerd[2031]: 2025-09-09 23:45:16.997 [INFO][4683] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" host="ip-172-31-26-206" Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.000 [INFO][4683] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.011 [INFO][4683] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" host="ip-172-31-26-206" Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.028 [INFO][4683] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 handle="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" host="ip-172-31-26-206" Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.028 [INFO][4683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" host="ip-172-31-26-206" Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.028 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:17.097695 containerd[2031]: 2025-09-09 23:45:17.029 [INFO][4683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" HandleID="k8s-pod-network.6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Workload="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.097949 containerd[2031]: 2025-09-09 23:45:17.036 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f02a0402-c726-48be-86a1-a888ea61e0f5", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"csi-node-driver-7cs7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfe4564dade", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:17.099761 containerd[2031]: 2025-09-09 23:45:17.036 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.2/32] ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.099761 containerd[2031]: 2025-09-09 23:45:17.036 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfe4564dade ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.099761 containerd[2031]: 2025-09-09 23:45:17.054 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.099956 containerd[2031]: 2025-09-09 23:45:17.055 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" 
Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f02a0402-c726-48be-86a1-a888ea61e0f5", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc", Pod:"csi-node-driver-7cs7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfe4564dade", MAC:"6a:f6:3a:09:cc:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:17.100072 containerd[2031]: 2025-09-09 23:45:17.090 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" Namespace="calico-system" Pod="csi-node-driver-7cs7p" WorkloadEndpoint="ip--172--31--26--206-k8s-csi--node--driver--7cs7p-eth0" Sep 9 23:45:17.188700 systemd-networkd[1882]: calibf9a46868d3: Link UP Sep 9 23:45:17.196077 systemd-networkd[1882]: calibf9a46868d3: Gained carrier Sep 9 23:45:17.210248 containerd[2031]: time="2025-09-09T23:45:17.210167326Z" level=info msg="connecting to shim 6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc" address="unix:///run/containerd/s/45e446d62f573931d9fc581ba50b41419c9c541156fcc9e0279c18cf2bf44a7c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:17.254187 containerd[2031]: 2025-09-09 23:45:16.843 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:17.254187 containerd[2031]: 2025-09-09 23:45:16.875 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0 goldmane-54d579b49d- calico-system 2b4e4851-12cf-4217-b8be-222300d3bf0b 844 0 2025-09-09 23:44:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-206 goldmane-54d579b49d-5xd2t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibf9a46868d3 [] [] }} ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" 
WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-" Sep 9 23:45:17.254187 containerd[2031]: 2025-09-09 23:45:16.877 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.254187 containerd[2031]: 2025-09-09 23:45:16.956 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" HandleID="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Workload="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" HandleID="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Workload="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038f9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-206", "pod":"goldmane-54d579b49d-5xd2t", "timestamp":"2025-09-09 23:45:16.95668134 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:16.957 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.028 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.029 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.073 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" host="ip-172-31-26-206" Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.092 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.101 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.107 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.254535 containerd[2031]: 2025-09-09 23:45:17.112 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.112 [INFO][4684] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" host="ip-172-31-26-206" Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.118 [INFO][4684] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3 Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.128 [INFO][4684] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" host="ip-172-31-26-206" Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.149 [INFO][4684] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" host="ip-172-31-26-206" Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.149 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" host="ip-172-31-26-206" Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.150 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
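Worth noting in the goldmane trace: invocation [4684] logged "About to acquire host-wide IPAM lock" at 23:45:16.957 but only acquired it at 23:45:17.028, the moment [4683] released it, so concurrent CNI ADDs on one node serialize on this lock. A quick check of the wait, using the timestamps from the entries above:

```go
package main

import (
	"fmt"
	"time"
)

// How long CNI invocation [4684] (goldmane) waited behind [4683] on the
// host-wide IPAM lock, per the timestamps in the trace above.
func main() {
	const layout = "2006-01-02 15:04:05.000"
	about, _ := time.Parse(layout, "2025-09-09 23:45:16.957")    // "About to acquire"
	acquired, _ := time.Parse(layout, "2025-09-09 23:45:17.028") // "Acquired"
	fmt.Println(acquired.Sub(about)) // 71ms spent queued behind [4683]
}
```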
Sep 9 23:45:17.254941 containerd[2031]: 2025-09-09 23:45:17.150 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" HandleID="k8s-pod-network.a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Workload="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.257573 containerd[2031]: 2025-09-09 23:45:17.168 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2b4e4851-12cf-4217-b8be-222300d3bf0b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"goldmane-54d579b49d-5xd2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf9a46868d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:17.257573 containerd[2031]: 2025-09-09 23:45:17.169 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.3/32] ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.257766 containerd[2031]: 2025-09-09 23:45:17.169 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf9a46868d3 ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.257766 containerd[2031]: 2025-09-09 23:45:17.205 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.257868 containerd[2031]: 2025-09-09 23:45:17.219 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" 
WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2b4e4851-12cf-4217-b8be-222300d3bf0b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3", Pod:"goldmane-54d579b49d-5xd2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf9a46868d3", MAC:"ee:77:0c:4a:3e:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:17.257985 containerd[2031]: 2025-09-09 23:45:17.243 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" Namespace="calico-system" Pod="goldmane-54d579b49d-5xd2t" WorkloadEndpoint="ip--172--31--26--206-k8s-goldmane--54d579b49d--5xd2t-eth0" Sep 9 23:45:17.323807 systemd[1]: Started cri-containerd-6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc.scope - libcontainer container 6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc. Sep 9 23:45:17.364364 containerd[2031]: time="2025-09-09T23:45:17.364292266Z" level=info msg="connecting to shim a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3" address="unix:///run/containerd/s/e693d93797706291d47a9d73b054706f70ba83e2e3c2927efe7ac74355063be4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:17.468658 systemd[1]: Started cri-containerd-a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3.scope - libcontainer container a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3. 
Sep 9 23:45:17.492934 containerd[2031]: time="2025-09-09T23:45:17.492832919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs7p,Uid:f02a0402-c726-48be-86a1-a888ea61e0f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc\"" Sep 9 23:45:17.663114 containerd[2031]: time="2025-09-09T23:45:17.662949348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5xd2t,Uid:2b4e4851-12cf-4217-b8be-222300d3bf0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3\"" Sep 9 23:45:17.772210 containerd[2031]: time="2025-09-09T23:45:17.770636016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q5bjz,Uid:63153ad7-dcff-4576-b8ff-089f3921a547,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:17.777377 systemd-networkd[1882]: calic05b132f456: Gained IPv6LL Sep 9 23:45:17.788384 containerd[2031]: time="2025-09-09T23:45:17.787517892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-jkhpk,Uid:ab0c5703-81da-4d19-b1df-7707bee03dbd,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:45:18.160403 systemd-networkd[1882]: califfe4564dade: Gained IPv6LL Sep 9 23:45:18.337352 containerd[2031]: time="2025-09-09T23:45:18.337287083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:18.340631 containerd[2031]: time="2025-09-09T23:45:18.340554515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 23:45:18.342697 containerd[2031]: time="2025-09-09T23:45:18.341810351Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:18.370652 containerd[2031]: time="2025-09-09T23:45:18.370558079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:18.374782 containerd[2031]: time="2025-09-09T23:45:18.374439131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.808979297s" Sep 9 23:45:18.376435 containerd[2031]: time="2025-09-09T23:45:18.376080791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 23:45:18.381334 containerd[2031]: time="2025-09-09T23:45:18.381097115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 23:45:18.391213 containerd[2031]: time="2025-09-09T23:45:18.390762383Z" level=info msg="CreateContainer within sandbox \"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 23:45:18.426354 containerd[2031]: time="2025-09-09T23:45:18.423771240Z" level=info msg="Container 9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111: CDI devices from CRI Config.CDIDevices: []" 
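The pull record above reports 4605606 bytes read for the whisker image over 1.808979297s (the 5974839 figure is the image's unpacked size). Assuming "bytes read" is the on-the-wire count, a back-of-the-envelope effective rate from those two numbers:

```go
package main

import (
	"fmt"
	"time"
)

// Rough effective pull rate for ghcr.io/flatcar/calico/whisker:v3.30.3,
// from the "bytes read" and duration figures in the log above.
func main() {
	bytesRead := 4605606.0
	dur, _ := time.ParseDuration("1.808979297s")
	fmt.Printf("~%.2f MiB/s effective pull rate\n", bytesRead/dur.Seconds()/(1<<20))
}
```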
Sep 9 23:45:18.449709 containerd[2031]: time="2025-09-09T23:45:18.449445240Z" level=info msg="CreateContainer within sandbox \"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111\"" Sep 9 23:45:18.452501 containerd[2031]: time="2025-09-09T23:45:18.452424576Z" level=info msg="StartContainer for \"9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111\"" Sep 9 23:45:18.458403 containerd[2031]: time="2025-09-09T23:45:18.458331828Z" level=info msg="connecting to shim 9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111" address="unix:///run/containerd/s/fafa01ad78b814f1d6be7bab528861ac77d21f0e4550df2eee5b7e92709d9417" protocol=ttrpc version=3 Sep 9 23:45:18.522431 systemd[1]: Started cri-containerd-9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111.scope - libcontainer container 9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111. Sep 9 23:45:18.552738 systemd-networkd[1882]: cali14588315ab1: Link UP Sep 9 23:45:18.554501 systemd-networkd[1882]: cali14588315ab1: Gained carrier Sep 9 23:45:18.618968 containerd[2031]: 2025-09-09 23:45:17.980 [INFO][4890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:18.618968 containerd[2031]: 2025-09-09 23:45:18.062 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0 coredns-674b8bbfcf- kube-system 63153ad7-dcff-4576-b8ff-089f3921a547 842 0 2025-09-09 23:44:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-206 coredns-674b8bbfcf-q5bjz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14588315ab1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-" Sep 9 23:45:18.618968 containerd[2031]: 2025-09-09 23:45:18.062 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.618968 containerd[2031]: 2025-09-09 23:45:18.322 [INFO][4921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" HandleID="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.324 [INFO][4921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" HandleID="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330680), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-206", "pod":"coredns-674b8bbfcf-q5bjz", 
"timestamp":"2025-09-09 23:45:18.322866731 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.324 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.325 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.326 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.359 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" host="ip-172-31-26-206" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.380 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.417 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.429 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.436 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.619431 containerd[2031]: 2025-09-09 23:45:18.436 [INFO][4921] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" host="ip-172-31-26-206" Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.441 [INFO][4921] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.467 [INFO][4921] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" host="ip-172-31-26-206" Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.486 [INFO][4921] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" host="ip-172-31-26-206" Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.486 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" host="ip-172-31-26-206" Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.486 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:18.619953 containerd[2031]: 2025-09-09 23:45:18.486 [INFO][4921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" HandleID="k8s-pod-network.0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.509 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"63153ad7-dcff-4576-b8ff-089f3921a547", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"coredns-674b8bbfcf-q5bjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14588315ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.514 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.4/32] ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.518 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14588315ab1 ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.567 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" 
WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.571 [INFO][4890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"63153ad7-dcff-4576-b8ff-089f3921a547", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c", Pod:"coredns-674b8bbfcf-q5bjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14588315ab1", MAC:"2e:41:b5:42:7d:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:18.620348 containerd[2031]: 2025-09-09 23:45:18.602 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" Namespace="kube-system" Pod="coredns-674b8bbfcf-q5bjz" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--q5bjz-eth0" Sep 9 23:45:18.688722 containerd[2031]: time="2025-09-09T23:45:18.688505893Z" level=info msg="connecting to shim 0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c" address="unix:///run/containerd/s/4eda9805bbee9680e28e8f9cd4f85c62b7e7bb51d008418745edcaad3be9b6ef" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:18.762008 systemd-networkd[1882]: cali2f5f267e35f: Link UP Sep 9 23:45:18.788580 systemd-networkd[1882]: cali2f5f267e35f: Gained carrier Sep 9 23:45:18.833442 systemd-networkd[1882]: calibf9a46868d3: Gained IPv6LL Sep 9 23:45:18.847175 systemd[1]: Started cri-containerd-0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c.scope - libcontainer container 0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c. 
Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.002 [INFO][4901] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.070 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0 calico-apiserver-7fcccfb84d- calico-apiserver ab0c5703-81da-4d19-b1df-7707bee03dbd 846 0 2025-09-09 23:44:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcccfb84d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-206 calico-apiserver-7fcccfb84d-jkhpk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f5f267e35f [] [] }} ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.070 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.332 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" HandleID="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.335 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" HandleID="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-206", "pod":"calico-apiserver-7fcccfb84d-jkhpk", "timestamp":"2025-09-09 23:45:18.331980815 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.336 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.487 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.487 [INFO][4926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.566 [INFO][4926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.611 [INFO][4926] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.636 [INFO][4926] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.649 [INFO][4926] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.666 [INFO][4926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.666 [INFO][4926] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.674 [INFO][4926] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.694 [INFO][4926] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.720 [INFO][4926] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.5/26] block=192.168.8.0/26 handle="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.721 [INFO][4926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.5/26] handle="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" host="ip-172-31-26-206" Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.721 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:18.861907 containerd[2031]: 2025-09-09 23:45:18.723 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" HandleID="k8s-pod-network.2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.738 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0", GenerateName:"calico-apiserver-7fcccfb84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab0c5703-81da-4d19-b1df-7707bee03dbd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcccfb84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"calico-apiserver-7fcccfb84d-jkhpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f5f267e35f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.739 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.5/32] ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.741 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f5f267e35f ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.789 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.794 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0", GenerateName:"calico-apiserver-7fcccfb84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab0c5703-81da-4d19-b1df-7707bee03dbd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcccfb84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c", Pod:"calico-apiserver-7fcccfb84d-jkhpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f5f267e35f", MAC:"12:81:c5:37:2f:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:18.864373 containerd[2031]: 2025-09-09 23:45:18.827 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-jkhpk" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--jkhpk-eth0" Sep 9 23:45:18.888698 containerd[2031]: time="2025-09-09T23:45:18.888631838Z" level=info msg="StartContainer for \"9d7b5412c0287a8bd6093e5dc56ee1b1f9a9bf072767ccca4a3e3a7663ddb111\" returns successfully" Sep 9 23:45:18.958246 containerd[2031]: time="2025-09-09T23:45:18.957833762Z" level=info msg="connecting to shim 2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c" address="unix:///run/containerd/s/66e0c0efbe3a67d0c2df491df7ca8f096ce94f0317535a53656f518b46284615" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:19.076378 systemd[1]: Started cri-containerd-2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c.scope - libcontainer container 2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c. 
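At this point the journal carries an "IPAM assigned addresses" line for every pod set up so far. If you wanted to tabulate the assignments from a dump like this one, a pattern along the (hypothetical) lines below pulls the CIDR and container ID out of such entries:

```go
package main

import (
	"fmt"
	"regexp"
)

// Extract the assigned CIDR and container ID from an ipam_plugin
// "assigned addresses" record, using the calico-apiserver line as input.
func main() {
	re := regexp.MustCompile(`IPAM assigned addresses IPv4=\[([0-9./]+)\].*ContainerID="([0-9a-f]{12})`)
	line := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c"`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("container %s... got %s\n", m[2], m[1]) // container 2ea297fe81ed... got 192.168.8.5/26
	}
}
```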
Sep 9 23:45:19.088279 containerd[2031]: time="2025-09-09T23:45:19.088085135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q5bjz,Uid:63153ad7-dcff-4576-b8ff-089f3921a547,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c\"" Sep 9 23:45:19.105225 containerd[2031]: time="2025-09-09T23:45:19.104814215Z" level=info msg="CreateContainer within sandbox \"0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:45:19.135554 containerd[2031]: time="2025-09-09T23:45:19.135481919Z" level=info msg="Container 9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:19.150857 containerd[2031]: time="2025-09-09T23:45:19.150790427Z" level=info msg="CreateContainer within sandbox \"0c639680a5c9b8421b8559450cda33d919fdffd2589b57d03b1cbd382740b27c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4\"" Sep 9 23:45:19.153181 containerd[2031]: time="2025-09-09T23:45:19.153104387Z" level=info msg="StartContainer for \"9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4\"" Sep 9 23:45:19.162111 containerd[2031]: time="2025-09-09T23:45:19.162036311Z" level=info msg="connecting to shim 9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4" address="unix:///run/containerd/s/4eda9805bbee9680e28e8f9cd4f85c62b7e7bb51d008418745edcaad3be9b6ef" protocol=ttrpc version=3 Sep 9 23:45:19.278603 systemd[1]: Started cri-containerd-9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4.scope - libcontainer container 9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4. Sep 9 23:45:19.290931 containerd[2031]: time="2025-09-09T23:45:19.290800500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-jkhpk,Uid:ab0c5703-81da-4d19-b1df-7707bee03dbd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c\"" Sep 9 23:45:19.407445 containerd[2031]: time="2025-09-09T23:45:19.407399017Z" level=info msg="StartContainer for \"9ba9da1c3589f6beff3b07305cb33fe125f4fd55db042f4d8b58e013912285b4\" returns successfully" Sep 9 23:45:19.770605 containerd[2031]: time="2025-09-09T23:45:19.770477870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2h7d,Uid:385d8235-6e7a-4851-9a04-a461b1d648de,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:19.771107 containerd[2031]: time="2025-09-09T23:45:19.771056450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-vl5gd,Uid:23f2cb82-35f9-49b6-b5c8-108004fea67a,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:45:20.238907 (udev-worker)[4513]: Network interface NamePolicy= disabled on kernel command line. 
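Note that the coredns container (9ba9da1c...) connects to the same shim socket as its sandbox (0c639680..., see the earlier "connecting to shim" line): one shim process serves the whole pod, so only the container ID differs between the two records.

```go
package main

import (
	"fmt"
	"strings"
)

// The sandbox 0c639680... and its coredns container 9ba9da1c... both log the
// same shim address, copied verbatim from the two entries above.
func main() {
	sandbox := "unix:///run/containerd/s/4eda9805bbee9680e28e8f9cd4f85c62b7e7bb51d008418745edcaad3be9b6ef"
	container := "unix:///run/containerd/s/4eda9805bbee9680e28e8f9cd4f85c62b7e7bb51d008418745edcaad3be9b6ef"
	fmt.Println(strings.TrimPrefix(sandbox, "unix://") == strings.TrimPrefix(container, "unix://")) // true
}
```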
Sep 9 23:45:20.324562 systemd-networkd[1882]: vxlan.calico: Link UP Sep 9 23:45:20.324583 systemd-networkd[1882]: vxlan.calico: Gained carrier Sep 9 23:45:20.370928 kubelet[3474]: I0909 23:45:20.370759 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q5bjz" podStartSLOduration=49.370732981 podStartE2EDuration="49.370732981s" podCreationTimestamp="2025-09-09 23:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:20.360433309 +0000 UTC m=+55.818214646" watchObservedRunningTime="2025-09-09 23:45:20.370732981 +0000 UTC m=+55.828514270" Sep 9 23:45:20.400352 systemd-networkd[1882]: cali14588315ab1: Gained IPv6LL Sep 9 23:45:20.461550 containerd[2031]: time="2025-09-09T23:45:20.461490362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:20.464169 containerd[2031]: time="2025-09-09T23:45:20.463518698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 23:45:20.467174 containerd[2031]: time="2025-09-09T23:45:20.466390202Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:20.482023 containerd[2031]: time="2025-09-09T23:45:20.481859354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:20.502310 containerd[2031]: time="2025-09-09T23:45:20.500895674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 2.119597643s" Sep 9 23:45:20.502310 containerd[2031]: time="2025-09-09T23:45:20.500961110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 23:45:20.505541 systemd-networkd[1882]: cali031308abafe: Link UP Sep 9 23:45:20.510318 systemd-networkd[1882]: cali031308abafe: Gained carrier Sep 9 23:45:20.526091 containerd[2031]: time="2025-09-09T23:45:20.524224082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 23:45:20.528511 systemd-networkd[1882]: cali2f5f267e35f: Gained IPv6LL Sep 9 23:45:20.539084 containerd[2031]: time="2025-09-09T23:45:20.538269722Z" level=info msg="CreateContainer within sandbox \"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 23:45:20.574206 containerd[2031]: time="2025-09-09T23:45:20.573431066Z" level=info msg="Container fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.022 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0 calico-apiserver-7fcccfb84d- calico-apiserver 
23f2cb82-35f9-49b6-b5c8-108004fea67a 848 0 2025-09-09 23:44:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcccfb84d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-206 calico-apiserver-7fcccfb84d-vl5gd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali031308abafe [] [] }} ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.025 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.230 [INFO][5189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" HandleID="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.231 [INFO][5189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" HandleID="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039ace0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-206", "pod":"calico-apiserver-7fcccfb84d-vl5gd", "timestamp":"2025-09-09 23:45:20.230357749 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.232 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.232 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
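The kubelet pod_startup_latency_tracker record further up reports podStartSLOduration=49.370732981 for coredns-674b8bbfcf-q5bjz; it appears to be simply watchObservedRunningTime minus podCreationTimestamp, which the timestamps in that record confirm:

```go
package main

import (
	"fmt"
	"time"
)

// Recompute the coredns startup SLO duration from the kubelet record above.
func main() {
	created, _ := time.Parse("2006-01-02 15:04:05", "2025-09-09 23:44:31")
	observed, _ := time.Parse("2006-01-02 15:04:05.000000000", "2025-09-09 23:45:20.370732981")
	fmt.Println(observed.Sub(created)) // 49.370732981s
}
```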
Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.232 [INFO][5189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.302 [INFO][5189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.341 [INFO][5189] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.380 [INFO][5189] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.409 [INFO][5189] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.422 [INFO][5189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.423 [INFO][5189] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.437 [INFO][5189] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300 Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.447 [INFO][5189] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.467 [INFO][5189] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.6/26] block=192.168.8.0/26 handle="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.468 [INFO][5189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.6/26] handle="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" host="ip-172-31-26-206" Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.470 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:20.589657 containerd[2031]: 2025-09-09 23:45:20.470 [INFO][5189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.6/26] IPv6=[] ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" HandleID="k8s-pod-network.2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Workload="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.479 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0", GenerateName:"calico-apiserver-7fcccfb84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f2cb82-35f9-49b6-b5c8-108004fea67a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcccfb84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"calico-apiserver-7fcccfb84d-vl5gd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali031308abafe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.479 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.6/32] ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.479 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali031308abafe ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.514 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.533 [INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0", GenerateName:"calico-apiserver-7fcccfb84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f2cb82-35f9-49b6-b5c8-108004fea67a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcccfb84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300", Pod:"calico-apiserver-7fcccfb84d-vl5gd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali031308abafe", MAC:"1a:e5:34:92:a3:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:20.592261 containerd[2031]: 2025-09-09 23:45:20.562 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" Namespace="calico-apiserver" Pod="calico-apiserver-7fcccfb84d-vl5gd" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--apiserver--7fcccfb84d--vl5gd-eth0" Sep 9 23:45:20.632971 containerd[2031]: time="2025-09-09T23:45:20.632637567Z" level=info msg="CreateContainer within sandbox \"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7\"" Sep 9 23:45:20.636641 containerd[2031]: time="2025-09-09T23:45:20.635411787Z" level=info msg="StartContainer for \"fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7\"" Sep 9 23:45:20.650142 containerd[2031]: time="2025-09-09T23:45:20.650070363Z" level=info msg="connecting to shim fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7" address="unix:///run/containerd/s/45e446d62f573931d9fc581ba50b41419c9c541156fcc9e0279c18cf2bf44a7c" protocol=ttrpc version=3 Sep 9 23:45:20.710714 systemd-networkd[1882]: cali4265113930a: Link UP Sep 9 23:45:20.715801 systemd-networkd[1882]: cali4265113930a: Gained carrier Sep 9 23:45:20.730489 containerd[2031]: time="2025-09-09T23:45:20.729580767Z" level=info msg="connecting to shim 2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300" address="unix:///run/containerd/s/acafa1ccc6746446f0bf50bbd0edc3ce5b6afa282fff94ce437c86e593682b22" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:20.775526 
containerd[2031]: time="2025-09-09T23:45:20.775376427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9d778fc9-7r6b7,Uid:d09b54b6-c901-408b-b515-b6aeff15c64c,Namespace:calico-system,Attempt:0,}" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.016 [INFO][5153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0 coredns-674b8bbfcf- kube-system 385d8235-6e7a-4851-9a04-a461b1d648de 847 0 2025-09-09 23:44:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-206 coredns-674b8bbfcf-h2h7d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4265113930a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.016 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.388 [INFO][5191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" HandleID="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.404 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" HandleID="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c4e60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-206", "pod":"coredns-674b8bbfcf-h2h7d", "timestamp":"2025-09-09 23:45:20.373966573 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.406 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.471 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.471 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.500 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.534 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.553 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.560 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.583 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.584 [INFO][5191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.593 [INFO][5191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6 Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.613 [INFO][5191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.637 [INFO][5191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.7/26] block=192.168.8.0/26 handle="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.638 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.7/26] handle="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" host="ip-172-31-26-206" Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.638 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:20.844817 containerd[2031]: 2025-09-09 23:45:20.638 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.7/26] IPv6=[] ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" HandleID="k8s-pod-network.e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Workload="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.661 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"385d8235-6e7a-4851-9a04-a461b1d648de", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"coredns-674b8bbfcf-h2h7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4265113930a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.661 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.7/32] ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.661 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4265113930a ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.727 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" 
WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.734 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"385d8235-6e7a-4851-9a04-a461b1d648de", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6", Pod:"coredns-674b8bbfcf-h2h7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4265113930a", MAC:"32:4d:ec:ce:00:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:20.847304 containerd[2031]: 2025-09-09 23:45:20.820 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2h7d" WorkloadEndpoint="ip--172--31--26--206-k8s-coredns--674b8bbfcf--h2h7d-eth0" Sep 9 23:45:20.871576 systemd[1]: Started cri-containerd-fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7.scope - libcontainer container fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7. Sep 9 23:45:20.896626 systemd[1]: Started cri-containerd-2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300.scope - libcontainer container 2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300. 
Sep 9 23:45:21.007665 containerd[2031]: time="2025-09-09T23:45:21.007603092Z" level=info msg="connecting to shim e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6" address="unix:///run/containerd/s/9577b7d2ae36ec39f188176b0099f649a3be2cb17c761ceb8a446bee629765b6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:21.081042 systemd[1]: Started cri-containerd-e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6.scope - libcontainer container e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6. Sep 9 23:45:21.219232 containerd[2031]: time="2025-09-09T23:45:21.218797526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2h7d,Uid:385d8235-6e7a-4851-9a04-a461b1d648de,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6\"" Sep 9 23:45:21.241248 containerd[2031]: time="2025-09-09T23:45:21.241163114Z" level=info msg="CreateContainer within sandbox \"e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:45:21.281000 containerd[2031]: time="2025-09-09T23:45:21.279917294Z" level=info msg="Container 34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:21.293524 containerd[2031]: time="2025-09-09T23:45:21.293469098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcccfb84d-vl5gd,Uid:23f2cb82-35f9-49b6-b5c8-108004fea67a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300\"" Sep 9 23:45:21.308014 containerd[2031]: time="2025-09-09T23:45:21.307956770Z" level=info msg="CreateContainer within sandbox \"e8779d7aa35cd348653e2a96f77b05b6fc0bd101e5995c7cdd09411647d6e4c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f\"" Sep 9 23:45:21.315644 containerd[2031]: time="2025-09-09T23:45:21.315594110Z" level=info msg="StartContainer for \"34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f\"" Sep 9 23:45:21.320853 containerd[2031]: time="2025-09-09T23:45:21.320793806Z" level=info msg="connecting to shim 34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f" address="unix:///run/containerd/s/9577b7d2ae36ec39f188176b0099f649a3be2cb17c761ceb8a446bee629765b6" protocol=ttrpc version=3 Sep 9 23:45:21.426798 containerd[2031]: time="2025-09-09T23:45:21.426487587Z" level=info msg="StartContainer for \"fb9baccec2b91d577acfb9aebe9d227c4ff76364e41e0a0e826178516f45a0c7\" returns successfully" Sep 9 23:45:21.443838 systemd[1]: Started cri-containerd-34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f.scope - libcontainer container 34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f. 
Sep 9 23:45:21.481386 systemd-networkd[1882]: cali1599e05412f: Link UP Sep 9 23:45:21.485711 systemd-networkd[1882]: cali1599e05412f: Gained carrier Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.078 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0 calico-kube-controllers-6c9d778fc9- calico-system d09b54b6-c901-408b-b515-b6aeff15c64c 845 0 2025-09-09 23:44:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c9d778fc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-206 calico-kube-controllers-6c9d778fc9-7r6b7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1599e05412f [] [] }} ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.080 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.244 [INFO][5345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" HandleID="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Workload="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.247 [INFO][5345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" HandleID="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Workload="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-206", "pod":"calico-kube-controllers-6c9d778fc9-7r6b7", "timestamp":"2025-09-09 23:45:21.243814706 +0000 UTC"}, Hostname:"ip-172-31-26-206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.247 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.247 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.247 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-206' Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.292 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.319 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.356 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.379 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.398 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.398 [INFO][5345] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.406 [INFO][5345] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08 Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.433 [INFO][5345] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.459 [INFO][5345] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.8/26] block=192.168.8.0/26 handle="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.460 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.8/26] handle="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" host="ip-172-31-26-206" Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.460 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:45:21.538242 containerd[2031]: 2025-09-09 23:45:21.460 [INFO][5345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.8/26] IPv6=[] ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" HandleID="k8s-pod-network.89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Workload="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 23:45:21.467 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0", GenerateName:"calico-kube-controllers-6c9d778fc9-", Namespace:"calico-system", SelfLink:"", UID:"d09b54b6-c901-408b-b515-b6aeff15c64c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c9d778fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"", Pod:"calico-kube-controllers-6c9d778fc9-7r6b7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1599e05412f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 23:45:21.468 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.8/32] ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 23:45:21.469 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1599e05412f ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 23:45:21.486 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 
23:45:21.490 [INFO][5272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0", GenerateName:"calico-kube-controllers-6c9d778fc9-", Namespace:"calico-system", SelfLink:"", UID:"d09b54b6-c901-408b-b515-b6aeff15c64c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c9d778fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-206", ContainerID:"89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08", Pod:"calico-kube-controllers-6c9d778fc9-7r6b7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1599e05412f", MAC:"62:f5:28:97:1a:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:45:21.542654 containerd[2031]: 2025-09-09 23:45:21.528 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" Namespace="calico-system" Pod="calico-kube-controllers-6c9d778fc9-7r6b7" WorkloadEndpoint="ip--172--31--26--206-k8s-calico--kube--controllers--6c9d778fc9--7r6b7-eth0" Sep 9 23:45:21.638636 containerd[2031]: time="2025-09-09T23:45:21.638567752Z" level=info msg="StartContainer for \"34d34df638e42a7f86300a1998ae6c199e662e30ec096404fa16d50983a5213f\" returns successfully" Sep 9 23:45:21.684311 containerd[2031]: time="2025-09-09T23:45:21.684026344Z" level=info msg="connecting to shim 89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08" address="unix:///run/containerd/s/e2d8587eec0ab6a5b285a1401796538265b4f66c20956a6c3e5d81c854179bca" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:21.744457 systemd-networkd[1882]: cali031308abafe: Gained IPv6LL Sep 9 23:45:21.774604 systemd[1]: Started cri-containerd-89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08.scope - libcontainer container 89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08. 
Sep 9 23:45:21.971355 containerd[2031]: time="2025-09-09T23:45:21.970658333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9d778fc9-7r6b7,Uid:d09b54b6-c901-408b-b515-b6aeff15c64c,Namespace:calico-system,Attempt:0,} returns sandbox id \"89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08\"" Sep 9 23:45:22.065651 systemd-networkd[1882]: vxlan.calico: Gained IPv6LL Sep 9 23:45:22.440730 kubelet[3474]: I0909 23:45:22.440298 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h2h7d" podStartSLOduration=51.440273824 podStartE2EDuration="51.440273824s" podCreationTimestamp="2025-09-09 23:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:22.438089992 +0000 UTC m=+57.895871329" watchObservedRunningTime="2025-09-09 23:45:22.440273824 +0000 UTC m=+57.898055137" Sep 9 23:45:22.576679 systemd-networkd[1882]: cali4265113930a: Gained IPv6LL Sep 9 23:45:22.641616 systemd-networkd[1882]: cali1599e05412f: Gained IPv6LL Sep 9 23:45:23.830568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630227057.mount: Deactivated successfully. Sep 9 23:45:24.805539 containerd[2031]: time="2025-09-09T23:45:24.805478995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:24.807749 containerd[2031]: time="2025-09-09T23:45:24.807706759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 23:45:24.809998 containerd[2031]: time="2025-09-09T23:45:24.809954719Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:24.816829 containerd[2031]: time="2025-09-09T23:45:24.816766819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:24.818612 containerd[2031]: time="2025-09-09T23:45:24.818564575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.294252245s" Sep 9 23:45:24.818919 containerd[2031]: time="2025-09-09T23:45:24.818768407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 23:45:24.820745 containerd[2031]: time="2025-09-09T23:45:24.820700491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 23:45:24.828729 containerd[2031]: time="2025-09-09T23:45:24.828668011Z" level=info msg="CreateContainer within sandbox \"a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 23:45:24.849489 containerd[2031]: time="2025-09-09T23:45:24.849416840Z" level=info msg="Container d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306: CDI devices from CRI Config.CDIDevices: []" Sep 9 
23:45:24.872784 containerd[2031]: time="2025-09-09T23:45:24.872719424Z" level=info msg="CreateContainer within sandbox \"a4defae08e6591689035d6a7142321b3674bbd35d124f7fe540fb57270ef9df3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\"" Sep 9 23:45:24.874919 containerd[2031]: time="2025-09-09T23:45:24.874860272Z" level=info msg="StartContainer for \"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\"" Sep 9 23:45:24.877897 containerd[2031]: time="2025-09-09T23:45:24.877774868Z" level=info msg="connecting to shim d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306" address="unix:///run/containerd/s/e693d93797706291d47a9d73b054706f70ba83e2e3c2927efe7ac74355063be4" protocol=ttrpc version=3 Sep 9 23:45:24.962571 systemd[1]: Started cri-containerd-d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306.scope - libcontainer container d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306. Sep 9 23:45:25.085882 containerd[2031]: time="2025-09-09T23:45:25.085650689Z" level=info msg="StartContainer for \"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" returns successfully" Sep 9 23:45:25.540312 ntpd[2004]: Listen normally on 8 vxlan.calico 192.168.8.0:123 Sep 9 23:45:25.540445 ntpd[2004]: Listen normally on 9 calic05b132f456 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 9 23:45:25.540522 ntpd[2004]: Listen normally on 10 califfe4564dade [fe80::ecee:eeff:feee:eeee%5]:123 Sep 9 23:45:25.540595 ntpd[2004]: Listen normally on 11 calibf9a46868d3 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 9 23:45:25.540658 ntpd[2004]: Listen normally on 12 cali14588315ab1 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 9 23:45:25.540723 ntpd[2004]: Listen normally on 13 cali2f5f267e35f [fe80::ecee:eeff:feee:eeee%8]:123 Sep 9 23:45:25.540784 ntpd[2004]: Listen normally on 14 vxlan.calico [fe80::64fd:9dff:fea8:a7a0%9]:123 Sep 9 23:45:25.540845 ntpd[2004]: Listen normally on 15 cali031308abafe [fe80::ecee:eeff:feee:eeee%10]:123 Sep 9 23:45:25.540905 ntpd[2004]: Listen normally on 16 cali4265113930a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 9 23:45:25.540966 ntpd[2004]: Listen normally on 17 cali1599e05412f
[fe80::ecee:eeff:feee:eeee%14]:123 Sep 9 23:45:25.744658 containerd[2031]: time="2025-09-09T23:45:25.744538928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"92eac738a84582a38deb9e0793d70173bad21f820edc87d7e3f1081137c13447\" pid:5578 exit_status:1 exited_at:{seconds:1757461525 nanos:743383916}" Sep 9 23:45:26.653209 containerd[2031]: time="2025-09-09T23:45:26.653108133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"dba12a277c655dd628d74c1641070a7ad7d35c9da120eeb889115eccf75b4100\" pid:5602 exit_status:1 exited_at:{seconds:1757461526 nanos:652210892}" Sep 9 23:45:27.616100 containerd[2031]: time="2025-09-09T23:45:27.615990633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"af13e17f9c5b85e05d688b47417a21a255c0922a60ab930d04838b1ac4dd3873\" pid:5633 exit_status:1 exited_at:{seconds:1757461527 nanos:615553233}" Sep 9 23:45:28.396645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027369191.mount: Deactivated successfully. Sep 9 23:45:28.432405 containerd[2031]: time="2025-09-09T23:45:28.432323445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:28.435308 containerd[2031]: time="2025-09-09T23:45:28.435218181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 23:45:28.438570 containerd[2031]: time="2025-09-09T23:45:28.438027405Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:28.444746 containerd[2031]: time="2025-09-09T23:45:28.444664653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:28.447221 containerd[2031]: time="2025-09-09T23:45:28.447171621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.626184066s" Sep 9 23:45:28.447407 containerd[2031]: time="2025-09-09T23:45:28.447380157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 23:45:28.450593 containerd[2031]: time="2025-09-09T23:45:28.450320841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:45:28.459702 containerd[2031]: time="2025-09-09T23:45:28.459646101Z" level=info msg="CreateContainer within sandbox \"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 23:45:28.481160 containerd[2031]: time="2025-09-09T23:45:28.477484390Z" level=info msg="Container b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4: CDI devices from CRI Config.CDIDevices: 
[]" Sep 9 23:45:28.492529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796156944.mount: Deactivated successfully. Sep 9 23:45:28.510573 containerd[2031]: time="2025-09-09T23:45:28.510500998Z" level=info msg="CreateContainer within sandbox \"4d97da077c6ef6966a92614d5bb45941825aefce605502f7a0d34f9da4f2a9a0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4\"" Sep 9 23:45:28.511775 containerd[2031]: time="2025-09-09T23:45:28.511633198Z" level=info msg="StartContainer for \"b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4\"" Sep 9 23:45:28.518402 containerd[2031]: time="2025-09-09T23:45:28.518339554Z" level=info msg="connecting to shim b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4" address="unix:///run/containerd/s/fafa01ad78b814f1d6be7bab528861ac77d21f0e4550df2eee5b7e92709d9417" protocol=ttrpc version=3 Sep 9 23:45:28.589617 systemd[1]: Started cri-containerd-b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4.scope - libcontainer container b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4. Sep 9 23:45:28.796943 containerd[2031]: time="2025-09-09T23:45:28.796898603Z" level=info msg="StartContainer for \"b40eb6b32b4e4d2136f3e50f4826a00abe8d064e9733bf01f867c142e15943f4\" returns successfully" Sep 9 23:45:29.114879 systemd[1]: Started sshd@9-172.31.26.206:22-139.178.89.65:41174.service - OpenSSH per-connection server daemon (139.178.89.65:41174). Sep 9 23:45:29.326587 sshd[5688]: Accepted publickey for core from 139.178.89.65 port 41174 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:45:29.330718 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:29.343837 systemd-logind[2011]: New session 10 of user core. Sep 9 23:45:29.352424 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:45:29.531152 kubelet[3474]: I0909 23:45:29.530540 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5xd2t" podStartSLOduration=27.377969488 podStartE2EDuration="34.530517203s" podCreationTimestamp="2025-09-09 23:44:55 +0000 UTC" firstStartedPulling="2025-09-09 23:45:17.66781944 +0000 UTC m=+53.125600729" lastFinishedPulling="2025-09-09 23:45:24.820367059 +0000 UTC m=+60.278148444" observedRunningTime="2025-09-09 23:45:25.498443011 +0000 UTC m=+60.956224432" watchObservedRunningTime="2025-09-09 23:45:29.530517203 +0000 UTC m=+64.988298516" Sep 9 23:45:29.685722 sshd[5691]: Connection closed by 139.178.89.65 port 41174 Sep 9 23:45:29.686808 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:29.696092 systemd[1]: sshd@9-172.31.26.206:22-139.178.89.65:41174.service: Deactivated successfully. Sep 9 23:45:29.702071 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:45:29.705900 systemd-logind[2011]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:45:29.709369 systemd-logind[2011]: Removed session 10. 
Sep 9 23:45:33.026417 containerd[2031]: time="2025-09-09T23:45:33.026339592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:33.028546 containerd[2031]: time="2025-09-09T23:45:33.028233816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 23:45:33.030096 containerd[2031]: time="2025-09-09T23:45:33.030040548Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:33.034998 containerd[2031]: time="2025-09-09T23:45:33.034941504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:33.036562 containerd[2031]: time="2025-09-09T23:45:33.036377064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.585520735s" Sep 9 23:45:33.036562 containerd[2031]: time="2025-09-09T23:45:33.036430560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:45:33.038948 containerd[2031]: time="2025-09-09T23:45:33.038615640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:45:33.047438 containerd[2031]: time="2025-09-09T23:45:33.047347968Z" level=info msg="CreateContainer within sandbox \"2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:45:33.061016 containerd[2031]: time="2025-09-09T23:45:33.059456160Z" level=info msg="Container 9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:33.076371 containerd[2031]: time="2025-09-09T23:45:33.076196832Z" level=info msg="CreateContainer within sandbox \"2ea297fe81ed80f24f370824f63ebd3e22e8587a213daed4782f0f587083bb3c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f\"" Sep 9 23:45:33.077676 containerd[2031]: time="2025-09-09T23:45:33.077596080Z" level=info msg="StartContainer for \"9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f\"" Sep 9 23:45:33.080840 containerd[2031]: time="2025-09-09T23:45:33.080787600Z" level=info msg="connecting to shim 9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f" address="unix:///run/containerd/s/66e0c0efbe3a67d0c2df491df7ca8f096ce94f0317535a53656f518b46284615" protocol=ttrpc version=3 Sep 9 23:45:33.129467 systemd[1]: Started cri-containerd-9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f.scope - libcontainer container 9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f. 
Sep 9 23:45:33.210548 containerd[2031]: time="2025-09-09T23:45:33.210372385Z" level=info msg="StartContainer for \"9fedd8b120e8b15350bb4af875dffafab1f4b2e5aa61d395afd82a0ab5ef7d8f\" returns successfully" Sep 9 23:45:33.363980 containerd[2031]: time="2025-09-09T23:45:33.363809750Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:33.365913 containerd[2031]: time="2025-09-09T23:45:33.365841698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 23:45:33.372066 containerd[2031]: time="2025-09-09T23:45:33.371982134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 333.30743ms" Sep 9 23:45:33.372215 containerd[2031]: time="2025-09-09T23:45:33.372068426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:45:33.377441 containerd[2031]: time="2025-09-09T23:45:33.375329282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 23:45:33.380472 containerd[2031]: time="2025-09-09T23:45:33.380418938Z" level=info msg="CreateContainer within sandbox \"2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:45:33.392056 containerd[2031]: time="2025-09-09T23:45:33.391645850Z" level=info msg="Container 472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:33.419613 containerd[2031]: time="2025-09-09T23:45:33.419539214Z" level=info msg="CreateContainer within sandbox \"2f5db0ffb4de35e8e09e7925bd13e68d14f5b6a0335baa0880284c7f38387300\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a\"" Sep 9 23:45:33.421055 containerd[2031]: time="2025-09-09T23:45:33.420516338Z" level=info msg="StartContainer for \"472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a\"" Sep 9 23:45:33.425893 containerd[2031]: time="2025-09-09T23:45:33.425783726Z" level=info msg="connecting to shim 472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a" address="unix:///run/containerd/s/acafa1ccc6746446f0bf50bbd0edc3ce5b6afa282fff94ce437c86e593682b22" protocol=ttrpc version=3 Sep 9 23:45:33.477448 systemd[1]: Started cri-containerd-472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a.scope - libcontainer container 472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a. 
Sep 9 23:45:33.555178 kubelet[3474]: I0909 23:45:33.553714 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65ccd9bff6-zsr6d" podStartSLOduration=6.667187848 podStartE2EDuration="18.553693707s" podCreationTimestamp="2025-09-09 23:45:15 +0000 UTC" firstStartedPulling="2025-09-09 23:45:16.56355667 +0000 UTC m=+52.021337959" lastFinishedPulling="2025-09-09 23:45:28.450062517 +0000 UTC m=+63.907843818" observedRunningTime="2025-09-09 23:45:29.532498571 +0000 UTC m=+64.990279896" watchObservedRunningTime="2025-09-09 23:45:33.553693707 +0000 UTC m=+69.011475020" Sep 9 23:45:33.560204 kubelet[3474]: I0909 23:45:33.558485 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcccfb84d-jkhpk" podStartSLOduration=37.816469731 podStartE2EDuration="51.558460515s" podCreationTimestamp="2025-09-09 23:44:42 +0000 UTC" firstStartedPulling="2025-09-09 23:45:19.295956984 +0000 UTC m=+54.753738285" lastFinishedPulling="2025-09-09 23:45:33.037947756 +0000 UTC m=+68.495729069" observedRunningTime="2025-09-09 23:45:33.557511063 +0000 UTC m=+69.015292364" watchObservedRunningTime="2025-09-09 23:45:33.558460515 +0000 UTC m=+69.016241876" Sep 9 23:45:33.686544 containerd[2031]: time="2025-09-09T23:45:33.686236887Z" level=info msg="StartContainer for \"472781c1e294e475a3ef30c1e29b2f9a8625a4715aca0a06b1db697ea3904a9a\" returns successfully" Sep 9 23:45:34.546170 kubelet[3474]: I0909 23:45:34.545792 3474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:45:34.575309 kubelet[3474]: I0909 23:45:34.575187 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcccfb84d-vl5gd" podStartSLOduration=40.5056135 podStartE2EDuration="52.574553632s" podCreationTimestamp="2025-09-09 23:44:42 +0000 UTC" firstStartedPulling="2025-09-09 23:45:21.304337606 +0000 UTC m=+56.762118907" lastFinishedPulling="2025-09-09 23:45:33.373277738 +0000 UTC m=+68.831059039" observedRunningTime="2025-09-09 23:45:34.5732053 +0000 UTC m=+70.030986625" watchObservedRunningTime="2025-09-09 23:45:34.574553632 +0000 UTC m=+70.032334969" Sep 9 23:45:34.723819 systemd[1]: Started sshd@10-172.31.26.206:22-139.178.89.65:52182.service - OpenSSH per-connection server daemon (139.178.89.65:52182). Sep 9 23:45:34.968767 sshd[5798]: Accepted publickey for core from 139.178.89.65 port 52182 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:45:34.976479 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:34.999301 systemd-logind[2011]: New session 11 of user core. Sep 9 23:45:35.003494 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:45:35.408752 sshd[5805]: Connection closed by 139.178.89.65 port 52182 Sep 9 23:45:35.410275 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:35.420108 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:45:35.422640 systemd[1]: sshd@10-172.31.26.206:22-139.178.89.65:52182.service: Deactivated successfully. Sep 9 23:45:35.445474 systemd-logind[2011]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:45:35.453902 systemd-logind[2011]: Removed session 11. 
Sep 9 23:45:35.589661 containerd[2031]: time="2025-09-09T23:45:35.588451817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:35.592058 containerd[2031]: time="2025-09-09T23:45:35.591980117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 23:45:35.595370 containerd[2031]: time="2025-09-09T23:45:35.595195433Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:35.606009 containerd[2031]: time="2025-09-09T23:45:35.605363585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:45:35.607056 containerd[2031]: time="2025-09-09T23:45:35.606971621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.231568863s" Sep 9 23:45:35.608241 containerd[2031]: time="2025-09-09T23:45:35.607074941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 23:45:35.615426 containerd[2031]: time="2025-09-09T23:45:35.615366581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 23:45:35.625735 containerd[2031]: time="2025-09-09T23:45:35.625625813Z" level=info msg="CreateContainer within sandbox \"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 23:45:35.658668 containerd[2031]: time="2025-09-09T23:45:35.658378889Z" level=info msg="Container e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:35.670790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316102164.mount: Deactivated successfully. 
Sep 9 23:45:35.700947 containerd[2031]: time="2025-09-09T23:45:35.700871789Z" level=info msg="CreateContainer within sandbox \"6070ec9672ef5eb034608e527a575022e14abcbbb9d5916a58a0ce04edbee9cc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b\""
Sep 9 23:45:35.703559 containerd[2031]: time="2025-09-09T23:45:35.703488341Z" level=info msg="StartContainer for \"e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b\""
Sep 9 23:45:35.712685 containerd[2031]: time="2025-09-09T23:45:35.712612902Z" level=info msg="connecting to shim e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b" address="unix:///run/containerd/s/45e446d62f573931d9fc581ba50b41419c9c541156fcc9e0279c18cf2bf44a7c" protocol=ttrpc version=3
Sep 9 23:45:35.819501 systemd[1]: Started cri-containerd-e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b.scope - libcontainer container e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b.
Sep 9 23:45:35.972227 containerd[2031]: time="2025-09-09T23:45:35.971917651Z" level=info msg="StartContainer for \"e740302dc413a68a3f1f9866564ec33f6da8f4e149e14ca10403dbd4eb0c8e2b\" returns successfully"
Sep 9 23:45:36.642420 kubelet[3474]: I0909 23:45:36.642303 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7cs7p" podStartSLOduration=24.524841984 podStartE2EDuration="42.642275298s" podCreationTimestamp="2025-09-09 23:44:54 +0000 UTC" firstStartedPulling="2025-09-09 23:45:17.495989219 +0000 UTC m=+52.953770508" lastFinishedPulling="2025-09-09 23:45:35.613422521 +0000 UTC m=+71.071203822" observedRunningTime="2025-09-09 23:45:36.638326218 +0000 UTC m=+72.096107651" watchObservedRunningTime="2025-09-09 23:45:36.642275298 +0000 UTC m=+72.100056599"
Sep 9 23:45:36.985518 kubelet[3474]: I0909 23:45:36.983630 3474 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 23:45:36.985518 kubelet[3474]: I0909 23:45:36.983701 3474 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 23:45:39.746899 containerd[2031]: time="2025-09-09T23:45:39.746721706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:39.750483 containerd[2031]: time="2025-09-09T23:45:39.750405574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 9 23:45:39.753588 containerd[2031]: time="2025-09-09T23:45:39.753500254Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:39.762225 containerd[2031]: time="2025-09-09T23:45:39.761979010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:39.765426 containerd[2031]: time="2025-09-09T23:45:39.765195022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.149026217s"
Sep 9 23:45:39.765902 containerd[2031]: time="2025-09-09T23:45:39.765383278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 9 23:45:39.802092 containerd[2031]: time="2025-09-09T23:45:39.802005334Z" level=info msg="CreateContainer within sandbox \"89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 23:45:39.824758 containerd[2031]: time="2025-09-09T23:45:39.823639474Z" level=info msg="Container 4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:39.848998 containerd[2031]: time="2025-09-09T23:45:39.848704606Z" level=info msg="CreateContainer within sandbox \"89146346add00074924ccc627dcd163bf7adc769d1bdc799811b24d5c5a09b08\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\""
Sep 9 23:45:39.850885 containerd[2031]: time="2025-09-09T23:45:39.850840870Z" level=info msg="StartContainer for \"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\""
Sep 9 23:45:39.855405 containerd[2031]: time="2025-09-09T23:45:39.855349858Z" level=info msg="connecting to shim 4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac" address="unix:///run/containerd/s/e2d8587eec0ab6a5b285a1401796538265b4f66c20956a6c3e5d81c854179bca" protocol=ttrpc version=3
Sep 9 23:45:39.918444 systemd[1]: Started cri-containerd-4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac.scope - libcontainer container 4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac.
Sep 9 23:45:40.086932 containerd[2031]: time="2025-09-09T23:45:40.085399315Z" level=info msg="StartContainer for \"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\" returns successfully"
Sep 9 23:45:40.450915 systemd[1]: Started sshd@11-172.31.26.206:22-139.178.89.65:44198.service - OpenSSH per-connection server daemon (139.178.89.65:44198).
Sep 9 23:45:40.668362 sshd[5905]: Accepted publickey for core from 139.178.89.65 port 44198 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:40.676712 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:40.692136 systemd-logind[2011]: New session 12 of user core.
Sep 9 23:45:40.702757 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 23:45:40.746150 containerd[2031]: time="2025-09-09T23:45:40.746058251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\" id:\"9714b82166a7f9e83001358ff342fd3cc85cab9dd1d86fc6cf20fd00540f490d\" pid:5920 exited_at:{seconds:1757461540 nanos:745598843}"
Sep 9 23:45:40.771152 kubelet[3474]: I0909 23:45:40.769663 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c9d778fc9-7r6b7" podStartSLOduration=27.978076542 podStartE2EDuration="45.769638275s" podCreationTimestamp="2025-09-09 23:44:55 +0000 UTC" firstStartedPulling="2025-09-09 23:45:21.977009273 +0000 UTC m=+57.434790574" lastFinishedPulling="2025-09-09 23:45:39.768571006 +0000 UTC m=+75.226352307" observedRunningTime="2025-09-09 23:45:40.666395998 +0000 UTC m=+76.124177359" watchObservedRunningTime="2025-09-09 23:45:40.769638275 +0000 UTC m=+76.227419576"
Sep 9 23:45:40.965748 sshd[5927]: Connection closed by 139.178.89.65 port 44198
Sep 9 23:45:40.966792 sshd-session[5905]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:40.974247 systemd-logind[2011]: Session 12 logged out. Waiting for processes to exit.
Sep 9 23:45:40.975879 systemd[1]: sshd@11-172.31.26.206:22-139.178.89.65:44198.service: Deactivated successfully.
Sep 9 23:45:40.981637 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 23:45:40.986031 systemd-logind[2011]: Removed session 12.
Sep 9 23:45:41.000272 systemd[1]: Started sshd@12-172.31.26.206:22-139.178.89.65:44204.service - OpenSSH per-connection server daemon (139.178.89.65:44204).
Sep 9 23:45:41.207208 sshd[5943]: Accepted publickey for core from 139.178.89.65 port 44204 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:41.209979 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:41.217808 systemd-logind[2011]: New session 13 of user core.
Sep 9 23:45:41.225436 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 23:45:41.557384 sshd[5946]: Connection closed by 139.178.89.65 port 44204
Sep 9 23:45:41.558680 sshd-session[5943]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:41.570985 systemd[1]: sshd@12-172.31.26.206:22-139.178.89.65:44204.service: Deactivated successfully.
Sep 9 23:45:41.578224 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 23:45:41.580617 systemd-logind[2011]: Session 13 logged out. Waiting for processes to exit.
Sep 9 23:45:41.607619 systemd[1]: Started sshd@13-172.31.26.206:22-139.178.89.65:44218.service - OpenSSH per-connection server daemon (139.178.89.65:44218).
Sep 9 23:45:41.609056 systemd-logind[2011]: Removed session 13.
Sep 9 23:45:41.818186 sshd[5957]: Accepted publickey for core from 139.178.89.65 port 44218 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:41.820529 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:41.829235 systemd-logind[2011]: New session 14 of user core.
Sep 9 23:45:41.838795 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:45:42.082730 sshd[5960]: Connection closed by 139.178.89.65 port 44218
Sep 9 23:45:42.082076 sshd-session[5957]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:42.089719 systemd[1]: sshd@13-172.31.26.206:22-139.178.89.65:44218.service: Deactivated successfully.
Sep 9 23:45:42.095748 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:45:42.098217 systemd-logind[2011]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:45:42.102672 systemd-logind[2011]: Removed session 14.
Sep 9 23:45:46.283896 containerd[2031]: time="2025-09-09T23:45:46.283831910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" id:\"a29612d1d4e39e2273b273cf8d8e4b2bd75d4a5ee64337665f1ecbeed25e60a2\" pid:5993 exited_at:{seconds:1757461546 nanos:283267346}"
Sep 9 23:45:47.128397 systemd[1]: Started sshd@14-172.31.26.206:22-139.178.89.65:44228.service - OpenSSH per-connection server daemon (139.178.89.65:44228).
Sep 9 23:45:47.343073 sshd[6009]: Accepted publickey for core from 139.178.89.65 port 44228 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:47.344341 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:47.352313 systemd-logind[2011]: New session 15 of user core.
Sep 9 23:45:47.359389 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:45:47.626554 sshd[6014]: Connection closed by 139.178.89.65 port 44228
Sep 9 23:45:47.627883 sshd-session[6009]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:47.635467 systemd[1]: sshd@14-172.31.26.206:22-139.178.89.65:44228.service: Deactivated successfully.
Sep 9 23:45:47.639062 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:45:47.642222 systemd-logind[2011]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:45:47.646665 systemd-logind[2011]: Removed session 15.
Sep 9 23:45:52.673675 systemd[1]: Started sshd@15-172.31.26.206:22-139.178.89.65:53298.service - OpenSSH per-connection server daemon (139.178.89.65:53298).
Sep 9 23:45:52.879899 sshd[6028]: Accepted publickey for core from 139.178.89.65 port 53298 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:52.882314 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:52.890741 systemd-logind[2011]: New session 16 of user core.
Sep 9 23:45:52.902315 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:45:53.145708 sshd[6032]: Connection closed by 139.178.89.65 port 53298
Sep 9 23:45:53.146767 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:53.154839 systemd[1]: sshd@15-172.31.26.206:22-139.178.89.65:53298.service: Deactivated successfully.
Sep 9 23:45:53.160236 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 23:45:53.163750 systemd-logind[2011]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:45:53.167714 systemd-logind[2011]: Removed session 16.
Sep 9 23:45:55.824749 kubelet[3474]: I0909 23:45:55.823582 3474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:45:57.909316 containerd[2031]: time="2025-09-09T23:45:57.909105736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"a50b63ed2de19c10734146675cc9407b27de27b4b941987808ce0e6d08127bc0\" pid:6058 exited_at:{seconds:1757461557 nanos:906780376}"
Sep 9 23:45:58.191931 systemd[1]: Started sshd@16-172.31.26.206:22-139.178.89.65:53306.service - OpenSSH per-connection server daemon (139.178.89.65:53306).
Sep 9 23:45:58.196956 containerd[2031]: time="2025-09-09T23:45:58.196475857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"0f8a67f8a9126072c24bfcfc5db30d5f90dca3560411972ac774eaf42c6e4946\" pid:6080 exited_at:{seconds:1757461558 nanos:191049481}"
Sep 9 23:45:58.411184 sshd[6094]: Accepted publickey for core from 139.178.89.65 port 53306 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:45:58.414606 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:58.422468 systemd-logind[2011]: New session 17 of user core.
Sep 9 23:45:58.432391 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 23:45:58.691999 sshd[6097]: Connection closed by 139.178.89.65 port 53306
Sep 9 23:45:58.691862 sshd-session[6094]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:58.702424 systemd[1]: sshd@16-172.31.26.206:22-139.178.89.65:53306.service: Deactivated successfully.
Sep 9 23:45:58.710356 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 23:45:58.713527 systemd-logind[2011]: Session 17 logged out. Waiting for processes to exit.
Sep 9 23:45:58.716814 systemd-logind[2011]: Removed session 17.
Sep 9 23:46:03.735342 systemd[1]: Started sshd@17-172.31.26.206:22-139.178.89.65:37450.service - OpenSSH per-connection server daemon (139.178.89.65:37450).
Sep 9 23:46:03.937025 sshd[6120]: Accepted publickey for core from 139.178.89.65 port 37450 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:03.939624 sshd-session[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:03.947671 systemd-logind[2011]: New session 18 of user core.
Sep 9 23:46:03.959428 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:46:04.207734 sshd[6123]: Connection closed by 139.178.89.65 port 37450
Sep 9 23:46:04.208751 sshd-session[6120]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:04.217517 systemd[1]: sshd@17-172.31.26.206:22-139.178.89.65:37450.service: Deactivated successfully.
Sep 9 23:46:04.222115 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:46:04.225420 systemd-logind[2011]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:46:04.230372 systemd-logind[2011]: Removed session 18.
Sep 9 23:46:04.248470 systemd[1]: Started sshd@18-172.31.26.206:22-139.178.89.65:37452.service - OpenSSH per-connection server daemon (139.178.89.65:37452).
Sep 9 23:46:04.444113 sshd[6135]: Accepted publickey for core from 139.178.89.65 port 37452 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:04.447085 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:04.455296 systemd-logind[2011]: New session 19 of user core.
Sep 9 23:46:04.462421 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:46:05.082203 sshd[6138]: Connection closed by 139.178.89.65 port 37452
Sep 9 23:46:05.083207 sshd-session[6135]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:05.090353 systemd[1]: sshd@18-172.31.26.206:22-139.178.89.65:37452.service: Deactivated successfully.
Sep 9 23:46:05.095788 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:46:05.099643 systemd-logind[2011]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:46:05.119444 systemd-logind[2011]: Removed session 19.
Sep 9 23:46:05.120028 systemd[1]: Started sshd@19-172.31.26.206:22-139.178.89.65:37466.service - OpenSSH per-connection server daemon (139.178.89.65:37466).
Sep 9 23:46:05.322223 sshd[6148]: Accepted publickey for core from 139.178.89.65 port 37466 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:05.324258 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:05.332380 systemd-logind[2011]: New session 20 of user core.
Sep 9 23:46:05.340438 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:46:06.547166 sshd[6151]: Connection closed by 139.178.89.65 port 37466
Sep 9 23:46:06.546944 sshd-session[6148]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:06.558582 systemd[1]: sshd@19-172.31.26.206:22-139.178.89.65:37466.service: Deactivated successfully.
Sep 9 23:46:06.570814 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:46:06.574873 systemd-logind[2011]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:46:06.603820 systemd[1]: Started sshd@20-172.31.26.206:22-139.178.89.65:37482.service - OpenSSH per-connection server daemon (139.178.89.65:37482).
Sep 9 23:46:06.607827 systemd-logind[2011]: Removed session 20.
Sep 9 23:46:06.826401 sshd[6170]: Accepted publickey for core from 139.178.89.65 port 37482 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:06.828425 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:06.836920 systemd-logind[2011]: New session 21 of user core.
Sep 9 23:46:06.846412 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:46:07.408967 sshd[6174]: Connection closed by 139.178.89.65 port 37482
Sep 9 23:46:07.409648 sshd-session[6170]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:07.421110 systemd[1]: sshd@20-172.31.26.206:22-139.178.89.65:37482.service: Deactivated successfully.
Sep 9 23:46:07.430240 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:46:07.434508 systemd-logind[2011]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:46:07.453699 systemd[1]: Started sshd@21-172.31.26.206:22-139.178.89.65:37494.service - OpenSSH per-connection server daemon (139.178.89.65:37494).
Sep 9 23:46:07.456348 systemd-logind[2011]: Removed session 21.
Sep 9 23:46:07.646327 sshd[6184]: Accepted publickey for core from 139.178.89.65 port 37494 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:07.648383 sshd-session[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:07.656427 systemd-logind[2011]: New session 22 of user core.
Sep 9 23:46:07.664018 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:46:07.917802 sshd[6187]: Connection closed by 139.178.89.65 port 37494
Sep 9 23:46:07.918361 sshd-session[6184]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:07.924867 systemd[1]: sshd@21-172.31.26.206:22-139.178.89.65:37494.service: Deactivated successfully.
Sep 9 23:46:07.931995 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:46:07.935557 systemd-logind[2011]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:46:07.939397 systemd-logind[2011]: Removed session 22.
Sep 9 23:46:10.697772 containerd[2031]: time="2025-09-09T23:46:10.697706991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\" id:\"c07fb91c4fc34f49624e3d9b6d9d1e42151ac313def760a97f7cfbea8f4596a6\" pid:6209 exited_at:{seconds:1757461570 nanos:696754203}"
Sep 9 23:46:12.953850 systemd[1]: Started sshd@22-172.31.26.206:22-139.178.89.65:47706.service - OpenSSH per-connection server daemon (139.178.89.65:47706).
Sep 9 23:46:13.154306 sshd[6219]: Accepted publickey for core from 139.178.89.65 port 47706 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:13.156756 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:13.165291 systemd-logind[2011]: New session 23 of user core.
Sep 9 23:46:13.171011 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:46:13.421938 sshd[6222]: Connection closed by 139.178.89.65 port 47706
Sep 9 23:46:13.422447 sshd-session[6219]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:13.429583 systemd-logind[2011]: Session 23 logged out. Waiting for processes to exit.
Sep 9 23:46:13.431529 systemd[1]: sshd@22-172.31.26.206:22-139.178.89.65:47706.service: Deactivated successfully.
Sep 9 23:46:13.435999 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 23:46:13.441651 systemd-logind[2011]: Removed session 23.
Sep 9 23:46:16.463013 containerd[2031]: time="2025-09-09T23:46:16.462941780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" id:\"208b2f99e793b12d7dc5f6b99b2e6a38ef91b054c38bc593486e1437a628ada2\" pid:6248 exited_at:{seconds:1757461576 nanos:462471344}"
Sep 9 23:46:18.459003 systemd[1]: Started sshd@23-172.31.26.206:22-139.178.89.65:47714.service - OpenSSH per-connection server daemon (139.178.89.65:47714).
Sep 9 23:46:18.660047 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 47714 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:18.663262 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:18.674308 systemd-logind[2011]: New session 24 of user core.
Sep 9 23:46:18.684433 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 23:46:18.944965 sshd[6263]: Connection closed by 139.178.89.65 port 47714
Sep 9 23:46:18.945837 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:18.953879 systemd[1]: sshd@23-172.31.26.206:22-139.178.89.65:47714.service: Deactivated successfully.
Sep 9 23:46:18.957670 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 23:46:18.961468 systemd-logind[2011]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:46:18.964540 systemd-logind[2011]: Removed session 24.
Sep 9 23:46:23.990895 systemd[1]: Started sshd@24-172.31.26.206:22-139.178.89.65:42610.service - OpenSSH per-connection server daemon (139.178.89.65:42610).
Sep 9 23:46:24.200242 sshd[6275]: Accepted publickey for core from 139.178.89.65 port 42610 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:24.202631 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:24.215371 systemd-logind[2011]: New session 25 of user core.
Sep 9 23:46:24.224664 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 23:46:24.522148 sshd[6278]: Connection closed by 139.178.89.65 port 42610
Sep 9 23:46:24.522950 sshd-session[6275]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:24.534394 systemd[1]: sshd@24-172.31.26.206:22-139.178.89.65:42610.service: Deactivated successfully.
Sep 9 23:46:24.542729 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 23:46:24.547386 systemd-logind[2011]: Session 25 logged out. Waiting for processes to exit.
Sep 9 23:46:24.552897 systemd-logind[2011]: Removed session 25.
Sep 9 23:46:27.992243 containerd[2031]: time="2025-09-09T23:46:27.992044377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"b875519bcc456904f391910f016fb262f6e5684610d9fd88bd6909155e708650\" pid:6304 exited_at:{seconds:1757461587 nanos:990094173}"
Sep 9 23:46:29.560916 systemd[1]: Started sshd@25-172.31.26.206:22-139.178.89.65:42624.service - OpenSSH per-connection server daemon (139.178.89.65:42624).
Sep 9 23:46:29.787181 sshd[6315]: Accepted publickey for core from 139.178.89.65 port 42624 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:29.792902 sshd-session[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:29.805222 systemd-logind[2011]: New session 26 of user core.
Sep 9 23:46:29.815756 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 23:46:30.134540 sshd[6318]: Connection closed by 139.178.89.65 port 42624
Sep 9 23:46:30.134945 sshd-session[6315]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:30.144819 systemd[1]: sshd@25-172.31.26.206:22-139.178.89.65:42624.service: Deactivated successfully.
Sep 9 23:46:30.153219 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 23:46:30.158184 systemd-logind[2011]: Session 26 logged out. Waiting for processes to exit.
Sep 9 23:46:30.161314 systemd-logind[2011]: Removed session 26.
Sep 9 23:46:35.171636 systemd[1]: Started sshd@26-172.31.26.206:22-139.178.89.65:41092.service - OpenSSH per-connection server daemon (139.178.89.65:41092).
Sep 9 23:46:35.377372 sshd[6332]: Accepted publickey for core from 139.178.89.65 port 41092 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:35.380550 sshd-session[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:35.393214 systemd-logind[2011]: New session 27 of user core.
Sep 9 23:46:35.398468 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 23:46:35.677681 sshd[6335]: Connection closed by 139.178.89.65 port 41092
Sep 9 23:46:35.679324 sshd-session[6332]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:35.689785 systemd[1]: sshd@26-172.31.26.206:22-139.178.89.65:41092.service: Deactivated successfully.
Sep 9 23:46:35.697817 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 23:46:35.702924 systemd-logind[2011]: Session 27 logged out. Waiting for processes to exit.
Sep 9 23:46:35.706342 systemd-logind[2011]: Removed session 27.
Sep 9 23:46:36.078796 containerd[2031]: time="2025-09-09T23:46:36.078539281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\" id:\"a7127fea63081dbca5451f91643747af34358da9b1d7c3a902921ecabd62abb7\" pid:6359 exited_at:{seconds:1757461596 nanos:78078349}"
Sep 9 23:46:40.723768 systemd[1]: Started sshd@27-172.31.26.206:22-139.178.89.65:39124.service - OpenSSH per-connection server daemon (139.178.89.65:39124).
Sep 9 23:46:40.737372 containerd[2031]: time="2025-09-09T23:46:40.734967932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb532d29dd1ac570b2cfb137eb9e349e8b63be741ba464ac996309043a03fac\" id:\"29003ef471d39bb457e3e1ff090232f383e538ad9289e178ae8a0d66e74229cb\" pid:6382 exited_at:{seconds:1757461600 nanos:733930976}"
Sep 9 23:46:40.941152 sshd[6389]: Accepted publickey for core from 139.178.89.65 port 39124 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:40.943733 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:40.955821 systemd-logind[2011]: New session 28 of user core.
Sep 9 23:46:40.963524 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 23:46:41.251252 sshd[6395]: Connection closed by 139.178.89.65 port 39124
Sep 9 23:46:41.252605 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:41.263791 systemd[1]: sshd@27-172.31.26.206:22-139.178.89.65:39124.service: Deactivated successfully.
Sep 9 23:46:41.269690 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 23:46:41.276223 systemd-logind[2011]: Session 28 logged out. Waiting for processes to exit.
Sep 9 23:46:41.281708 systemd-logind[2011]: Removed session 28.
Sep 9 23:46:46.278030 containerd[2031]: time="2025-09-09T23:46:46.277874208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52a8ddb29f2b8011356b1cc49471bf287ee1724a59b958d7e7d7ad21df75b3e3\" id:\"1a3d7549e29ec5560fdacccf6b9465a0c953e54bf825b2d6cd170b3879288c2f\" pid:6426 exited_at:{seconds:1757461606 nanos:277510404}"
Sep 9 23:46:54.829909 systemd[1]: cri-containerd-5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764.scope: Deactivated successfully.
Sep 9 23:46:54.831818 systemd[1]: cri-containerd-5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764.scope: Consumed 24.627s CPU time, 107.5M memory peak, 352K read from disk.
Sep 9 23:46:54.838773 containerd[2031]: time="2025-09-09T23:46:54.838702919Z" level=info msg="received exit event container_id:\"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\" id:\"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\" pid:3799 exit_status:1 exited_at:{seconds:1757461614 nanos:838104299}"
Sep 9 23:46:54.839335 containerd[2031]: time="2025-09-09T23:46:54.838922615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\" id:\"5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764\" pid:3799 exit_status:1 exited_at:{seconds:1757461614 nanos:838104299}"
Sep 9 23:46:54.881086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764-rootfs.mount: Deactivated successfully.
Sep 9 23:46:54.928837 kubelet[3474]: I0909 23:46:54.928636 3474 scope.go:117] "RemoveContainer" containerID="5af9df629276f9bab8b9d9c9ad5413624616e9296759bbbca635c08f28c9a764"
Sep 9 23:46:54.948256 containerd[2031]: time="2025-09-09T23:46:54.947787275Z" level=info msg="CreateContainer within sandbox \"f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 9 23:46:54.967157 containerd[2031]: time="2025-09-09T23:46:54.966147827Z" level=info msg="Container 5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:54.984108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493544916.mount: Deactivated successfully.
Sep 9 23:46:55.003785 containerd[2031]: time="2025-09-09T23:46:55.003727915Z" level=info msg="CreateContainer within sandbox \"f3581bfa89603ba00e366c3effc0d5359f7b76e44e9d630f0883c2f548bd426c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501\""
Sep 9 23:46:55.005427 containerd[2031]: time="2025-09-09T23:46:55.005374771Z" level=info msg="StartContainer for \"5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501\""
Sep 9 23:46:55.007812 containerd[2031]: time="2025-09-09T23:46:55.007760695Z" level=info msg="connecting to shim 5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501" address="unix:///run/containerd/s/41c0f7f67f35a733d4fdfdaf80d74cd42d2a42e4f2e8c67774fefec7c7ff7fbc" protocol=ttrpc version=3
Sep 9 23:46:55.059928 systemd[1]: Started cri-containerd-5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501.scope - libcontainer container 5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501.
Sep 9 23:46:55.119985 containerd[2031]: time="2025-09-09T23:46:55.119223728Z" level=info msg="StartContainer for \"5289146a9fd15b99b88178f4f1d090dd05dc4c8dbd3ffbc853fa5ceea6e63501\" returns successfully"
Sep 9 23:46:55.914703 systemd[1]: cri-containerd-e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37.scope: Deactivated successfully.
Sep 9 23:46:55.916664 systemd[1]: cri-containerd-e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37.scope: Consumed 6.336s CPU time, 61M memory peak, 64K read from disk.
Sep 9 23:46:55.922662 containerd[2031]: time="2025-09-09T23:46:55.922507344Z" level=info msg="received exit event container_id:\"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\" id:\"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\" pid:3215 exit_status:1 exited_at:{seconds:1757461615 nanos:921583044}"
Sep 9 23:46:55.923889 containerd[2031]: time="2025-09-09T23:46:55.922632444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\" id:\"e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37\" pid:3215 exit_status:1 exited_at:{seconds:1757461615 nanos:921583044}"
Sep 9 23:46:55.987169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37-rootfs.mount: Deactivated successfully.
Sep 9 23:46:56.952589 kubelet[3474]: I0909 23:46:56.952545 3474 scope.go:117] "RemoveContainer" containerID="e8e6ed95d727f637816a85a755fba5c421bd2435511c1e3ca11d975d3a237d37"
Sep 9 23:46:56.957601 containerd[2031]: time="2025-09-09T23:46:56.957202297Z" level=info msg="CreateContainer within sandbox \"8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 9 23:46:56.977473 containerd[2031]: time="2025-09-09T23:46:56.977417221Z" level=info msg="Container 095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:56.997427 containerd[2031]: time="2025-09-09T23:46:56.997359529Z" level=info msg="CreateContainer within sandbox \"8f1844c9be0a608f5719218c25de1e2d73c19615b5d0502d8160841b1db7c976\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91\""
Sep 9 23:46:56.998293 containerd[2031]: time="2025-09-09T23:46:56.998255353Z" level=info msg="StartContainer for \"095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91\""
Sep 9 23:46:57.000760 containerd[2031]: time="2025-09-09T23:46:57.000712833Z" level=info msg="connecting to shim 095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91" address="unix:///run/containerd/s/999e16206879968bff1b44f5de1b581a40dc058e2672477452108e720e6fb276" protocol=ttrpc version=3
Sep 9 23:46:57.049427 systemd[1]: Started cri-containerd-095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91.scope - libcontainer container 095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91.
Sep 9 23:46:57.133956 containerd[2031]: time="2025-09-09T23:46:57.133897054Z" level=info msg="StartContainer for \"095d87773bb89607c5b9702a1578c80abed5d7e2d0b315871ebed8f12e2c4e91\" returns successfully"
Sep 9 23:46:57.617759 containerd[2031]: time="2025-09-09T23:46:57.617663736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"a22192d6ebc5d78e538d30fb644f5158a0e4e7e9f1679e01bbfe5dbcbb87737c\" pid:6535 exited_at:{seconds:1757461617 nanos:615240828}"
Sep 9 23:46:57.795732 kubelet[3474]: E0909 23:46:57.795475 3474 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-206?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 23:46:58.088320 containerd[2031]: time="2025-09-09T23:46:58.088259015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d05d4ebc560032448e601465388c01af44eb8cf12b9572ae707eb4a97ffb8306\" id:\"a348bca399155e720c443eb7d216e3726fcb51bc6613f31150017037913bb6ee\" pid:6566 exited_at:{seconds:1757461618 nanos:87572231}"
Sep 9 23:47:00.847320 systemd[1]: cri-containerd-655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62.scope: Deactivated successfully.
Sep 9 23:47:00.847900 systemd[1]: cri-containerd-655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62.scope: Consumed 5.661s CPU time, 21.1M memory peak, 192K read from disk.
Sep 9 23:47:00.851789 containerd[2031]: time="2025-09-09T23:47:00.851712760Z" level=info msg="received exit event container_id:\"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\" id:\"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\" pid:3222 exit_status:1 exited_at:{seconds:1757461620 nanos:850991452}"
Sep 9 23:47:00.853483 containerd[2031]: time="2025-09-09T23:47:00.852567964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\" id:\"655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62\" pid:3222 exit_status:1 exited_at:{seconds:1757461620 nanos:850991452}"
Sep 9 23:47:00.912199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62-rootfs.mount: Deactivated successfully.
Sep 9 23:47:00.974838 kubelet[3474]: I0909 23:47:00.974774 3474 scope.go:117] "RemoveContainer" containerID="655b5f0f48b3c31e32fba70a3d90ad9c1cb1dcb6626dca6e91b01a58517c3a62"
Sep 9 23:47:00.980651 containerd[2031]: time="2025-09-09T23:47:00.980587709Z" level=info msg="CreateContainer within sandbox \"6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 9 23:47:00.998494 containerd[2031]: time="2025-09-09T23:47:00.998430665Z" level=info msg="Container 76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:01.040061 containerd[2031]: time="2025-09-09T23:47:01.039972781Z" level=info msg="CreateContainer within sandbox \"6a25064ba4c128550325989521521cc6fa6dbebc11ebf26e2720418e2702fbdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6\""
Sep 9 23:47:01.041296 containerd[2031]: time="2025-09-09T23:47:01.041243269Z" level=info msg="StartContainer for \"76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6\""
Sep 9 23:47:01.044143 containerd[2031]: time="2025-09-09T23:47:01.043756873Z" level=info msg="connecting to shim 76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6" address="unix:///run/containerd/s/622738bb0cbeb08e7fdc14b66fd2258ff7de73984b711a778ac47479606b846c" protocol=ttrpc version=3
Sep 9 23:47:01.090779 systemd[1]: Started cri-containerd-76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6.scope - libcontainer container 76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6.
Sep 9 23:47:01.194742 containerd[2031]: time="2025-09-09T23:47:01.194591234Z" level=info msg="StartContainer for \"76fb82a1cde0c380cf6ef13467ddc59bef89155a868224bc3186a672680078d6\" returns successfully"