Sep 3 23:24:15.119736 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 3 23:24:15.119780 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:24:15.119805 kernel: KASLR disabled due to lack of seed
Sep 3 23:24:15.119821 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:24:15.119836 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 3 23:24:15.119851 kernel: secureboot: Secure boot disabled
Sep 3 23:24:15.119869 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:24:15.119883 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 3 23:24:15.119899 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 3 23:24:15.119914 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 3 23:24:15.119929 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 3 23:24:15.119948 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 3 23:24:15.119962 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 3 23:24:15.119977 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 3 23:24:15.119995 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 3 23:24:15.120011 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 3 23:24:15.120031 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 3 23:24:15.120047 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 3 23:24:15.120062 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 3 23:24:15.120078 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 3 23:24:15.120094 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 3 23:24:15.120109 kernel: printk: legacy bootconsole [uart0] enabled
Sep 3 23:24:15.120125 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:24:15.120141 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 3 23:24:15.120157 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 3 23:24:15.120172 kernel: Zone ranges:
Sep 3 23:24:15.120188 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 3 23:24:15.120207 kernel:   DMA32    empty
Sep 3 23:24:15.120223 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 3 23:24:15.120238 kernel:   Device   empty
Sep 3 23:24:15.120254 kernel: Movable zone start for each node
Sep 3 23:24:15.120269 kernel: Early memory node ranges
Sep 3 23:24:15.120285 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 3 23:24:15.120300 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 3 23:24:15.120317 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 3 23:24:15.120332 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 3 23:24:15.120348 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 3 23:24:15.120363 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 3 23:24:15.120379 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 3 23:24:15.120398 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 3 23:24:15.120421 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 3 23:24:15.120437 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 3 23:24:15.120454 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 3 23:24:15.120491 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:24:15.120516 kernel: psci: PSCIv1.0 detected in firmware.
Sep 3 23:24:15.120533 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:24:15.120550 kernel: psci: Trusted OS migration not required
Sep 3 23:24:15.120566 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:24:15.120583 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 3 23:24:15.120599 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:24:15.120616 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:24:15.120658 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 3 23:24:15.120682 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:24:15.120700 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:24:15.120717 kernel: CPU features: detected: Spectre-v2
Sep 3 23:24:15.120739 kernel: CPU features: detected: Spectre-v3a
Sep 3 23:24:15.120756 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:24:15.120772 kernel: CPU features: detected: ARM erratum 1742098
Sep 3 23:24:15.120789 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 3 23:24:15.120805 kernel: alternatives: applying boot alternatives
Sep 3 23:24:15.120824 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:24:15.120842 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:24:15.120859 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:24:15.120888 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:24:15.120909 kernel: Fallback order for Node 0: 0
Sep 3 23:24:15.120932 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 3 23:24:15.120948 kernel: Policy zone: Normal
Sep 3 23:24:15.120965 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:24:15.120981 kernel: software IO TLB: area num 2.
Sep 3 23:24:15.120998 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Sep 3 23:24:15.121014 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 3 23:24:15.121031 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:24:15.121048 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:24:15.121066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 3 23:24:15.121083 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:24:15.121100 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:24:15.121116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:24:15.121138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 3 23:24:15.121155 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:24:15.121171 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:24:15.121188 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:24:15.121204 kernel: GICv3: 96 SPIs implemented
Sep 3 23:24:15.121221 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:24:15.121238 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:24:15.121254 kernel: GICv3: GICv3 features: 16 PPIs
Sep 3 23:24:15.121271 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:24:15.121287 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 3 23:24:15.121304 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 3 23:24:15.121320 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:24:15.121341 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:24:15.121358 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 3 23:24:15.121374 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 3 23:24:15.121390 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 3 23:24:15.121407 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:24:15.121423 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 3 23:24:15.121439 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 3 23:24:15.121472 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 3 23:24:15.121492 kernel: Console: colour dummy device 80x25
Sep 3 23:24:15.121510 kernel: printk: legacy console [tty1] enabled
Sep 3 23:24:15.121527 kernel: ACPI: Core revision 20240827
Sep 3 23:24:15.121550 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 3 23:24:15.121567 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:24:15.121584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:24:15.121601 kernel: landlock: Up and running.
Sep 3 23:24:15.121617 kernel: SELinux: Initializing.
Sep 3 23:24:15.122026 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:24:15.122052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:24:15.122069 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:24:15.122087 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:24:15.122110 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:24:15.122127 kernel: Remapping and enabling EFI services.
Sep 3 23:24:15.122144 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:24:15.122160 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:24:15.122177 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 3 23:24:15.122194 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 3 23:24:15.122211 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 3 23:24:15.122227 kernel: smp: Brought up 1 node, 2 CPUs
Sep 3 23:24:15.122244 kernel: SMP: Total of 2 processors activated.
Sep 3 23:24:15.122274 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:24:15.122291 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:24:15.122312 kernel: CPU features: detected: 32-bit EL1 Support
Sep 3 23:24:15.122330 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:24:15.122363 kernel: alternatives: applying system-wide alternatives
Sep 3 23:24:15.122383 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved)
Sep 3 23:24:15.122401 kernel: devtmpfs: initialized
Sep 3 23:24:15.122424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:24:15.122442 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 3 23:24:15.122460 kernel: 17040 pages in range for non-PLT usage
Sep 3 23:24:15.122477 kernel: 508560 pages in range for PLT usage
Sep 3 23:24:15.122494 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:24:15.122512 kernel: SMBIOS 3.0.0 present.
Sep 3 23:24:15.122529 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 3 23:24:15.122547 kernel: DMI: Memory slots populated: 0/0
Sep 3 23:24:15.122577 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:24:15.122603 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:24:15.122622 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:24:15.122662 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:24:15.122683 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:24:15.122701 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1
Sep 3 23:24:15.122718 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:24:15.122736 kernel: cpuidle: using governor menu
Sep 3 23:24:15.122754 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:24:15.122771 kernel: ASID allocator initialised with 65536 entries
Sep 3 23:24:15.122794 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:24:15.122812 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:24:15.122830 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:24:15.122847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:24:15.122865 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:24:15.122883 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:24:15.122900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:24:15.122919 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:24:15.122937 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:24:15.122959 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:24:15.122977 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:24:15.122995 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:24:15.123013 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:24:15.123030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:24:15.123048 kernel: ACPI: Interpreter enabled
Sep 3 23:24:15.123065 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:24:15.123082 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:24:15.123100 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:24:15.123121 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:24:15.123139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 3 23:24:15.123427 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:24:15.123703 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:24:15.124593 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:24:15.127142 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 3 23:24:15.127343 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 3 23:24:15.127378 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 3 23:24:15.127397 kernel: acpiphp: Slot [1] registered
Sep 3 23:24:15.127415 kernel: acpiphp: Slot [2] registered
Sep 3 23:24:15.127433 kernel: acpiphp: Slot [3] registered
Sep 3 23:24:15.127450 kernel: acpiphp: Slot [4] registered
Sep 3 23:24:15.127467 kernel: acpiphp: Slot [5] registered
Sep 3 23:24:15.127485 kernel: acpiphp: Slot [6] registered
Sep 3 23:24:15.127502 kernel: acpiphp: Slot [7] registered
Sep 3 23:24:15.127519 kernel: acpiphp: Slot [8] registered
Sep 3 23:24:15.127536 kernel: acpiphp: Slot [9] registered
Sep 3 23:24:15.127558 kernel: acpiphp: Slot [10] registered
Sep 3 23:24:15.127575 kernel: acpiphp: Slot [11] registered
Sep 3 23:24:15.127593 kernel: acpiphp: Slot [12] registered
Sep 3 23:24:15.127610 kernel: acpiphp: Slot [13] registered
Sep 3 23:24:15.127627 kernel: acpiphp: Slot [14] registered
Sep 3 23:24:15.127666 kernel: acpiphp: Slot [15] registered
Sep 3 23:24:15.127686 kernel: acpiphp: Slot [16] registered
Sep 3 23:24:15.127704 kernel: acpiphp: Slot [17] registered
Sep 3 23:24:15.127722 kernel: acpiphp: Slot [18] registered
Sep 3 23:24:15.127745 kernel: acpiphp: Slot [19] registered
Sep 3 23:24:15.127763 kernel: acpiphp: Slot [20] registered
Sep 3 23:24:15.127781 kernel: acpiphp: Slot [21] registered
Sep 3 23:24:15.127800 kernel: acpiphp: Slot [22] registered
Sep 3 23:24:15.127817 kernel: acpiphp: Slot [23] registered
Sep 3 23:24:15.127834 kernel: acpiphp: Slot [24] registered
Sep 3 23:24:15.127852 kernel: acpiphp: Slot [25] registered
Sep 3 23:24:15.127870 kernel: acpiphp: Slot [26] registered
Sep 3 23:24:15.127888 kernel: acpiphp: Slot [27] registered
Sep 3 23:24:15.127906 kernel: acpiphp: Slot [28] registered
Sep 3 23:24:15.127927 kernel: acpiphp: Slot [29] registered
Sep 3 23:24:15.127945 kernel: acpiphp: Slot [30] registered
Sep 3 23:24:15.127963 kernel: acpiphp: Slot [31] registered
Sep 3 23:24:15.127981 kernel: PCI host bridge to bus 0000:00
Sep 3 23:24:15.129898 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 3 23:24:15.130089 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:24:15.130263 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 3 23:24:15.130444 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 3 23:24:15.130796 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:24:15.131029 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 3 23:24:15.131227 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 3 23:24:15.131434 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 3 23:24:15.131631 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 3 23:24:15.131860 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 3 23:24:15.132079 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 3 23:24:15.132317 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 3 23:24:15.132559 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 3 23:24:15.132813 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 3 23:24:15.133012 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 3 23:24:15.133205 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 3 23:24:15.133398 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 3 23:24:15.133631 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 3 23:24:15.135980 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 3 23:24:15.136187 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 3 23:24:15.136370 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 3 23:24:15.136567 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:24:15.136791 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 3 23:24:15.136838 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:24:15.136860 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:24:15.136878 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:24:15.136896 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:24:15.136914 kernel: iommu: Default domain type: Translated
Sep 3 23:24:15.136932 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:24:15.136949 kernel: efivars: Registered efivars operations
Sep 3 23:24:15.136967 kernel: vgaarb: loaded
Sep 3 23:24:15.136984 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:24:15.137001 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:24:15.137025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:24:15.137042 kernel: pnp: PnP ACPI init
Sep 3 23:24:15.137253 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 3 23:24:15.137279 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:24:15.137298 kernel: NET: Registered PF_INET protocol family
Sep 3 23:24:15.137316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:24:15.137333 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:24:15.137351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:24:15.137374 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:24:15.137392 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:24:15.137409 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:24:15.137427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:24:15.137445 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:24:15.137478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:24:15.137497 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:24:15.137515 kernel: kvm [1]: HYP mode not available
Sep 3 23:24:15.137533 kernel: Initialise system trusted keyrings
Sep 3 23:24:15.137557 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:24:15.137575 kernel: Key type asymmetric registered
Sep 3 23:24:15.137593 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:24:15.137610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:24:15.137628 kernel: io scheduler mq-deadline registered
Sep 3 23:24:15.137667 kernel: io scheduler kyber registered
Sep 3 23:24:15.137686 kernel: io scheduler bfq registered
Sep 3 23:24:15.137901 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 3 23:24:15.137933 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:24:15.137952 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:24:15.137970 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 3 23:24:15.137988 kernel: ACPI: button: Sleep Button [SLPB]
Sep 3 23:24:15.138005 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:24:15.138024 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 3 23:24:15.138223 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 3 23:24:15.138249 kernel: printk: legacy console [ttyS0] disabled
Sep 3 23:24:15.138267 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 3 23:24:15.138291 kernel: printk: legacy console [ttyS0] enabled
Sep 3 23:24:15.138309 kernel: printk: legacy bootconsole [uart0] disabled
Sep 3 23:24:15.138326 kernel: thunder_xcv, ver 1.0
Sep 3 23:24:15.138344 kernel: thunder_bgx, ver 1.0
Sep 3 23:24:15.138362 kernel: nicpf, ver 1.0
Sep 3 23:24:15.138380 kernel: nicvf, ver 1.0
Sep 3 23:24:15.138591 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:24:15.139870 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:24:14 UTC (1756941854)
Sep 3 23:24:15.139914 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:24:15.139934 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 3 23:24:15.139952 kernel: watchdog: NMI not fully supported
Sep 3 23:24:15.139969 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:24:15.139987 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:24:15.140005 kernel: Segment Routing with IPv6
Sep 3 23:24:15.140022 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:24:15.140040 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:24:15.140058 kernel: Key type dns_resolver registered
Sep 3 23:24:15.140080 kernel: registered taskstats version 1
Sep 3 23:24:15.140099 kernel: Loading compiled-in X.509 certificates
Sep 3 23:24:15.140117 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:24:15.140135 kernel: Demotion targets for Node 0: null
Sep 3 23:24:15.140153 kernel: Key type .fscrypt registered
Sep 3 23:24:15.140170 kernel: Key type fscrypt-provisioning registered
Sep 3 23:24:15.140188 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:24:15.140206 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:24:15.140223 kernel: ima: No architecture policies found
Sep 3 23:24:15.140245 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:24:15.140264 kernel: clk: Disabling unused clocks
Sep 3 23:24:15.140283 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:24:15.140300 kernel: Warning: unable to open an initial console.
Sep 3 23:24:15.144612 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:24:15.144652 kernel: Run /init as init process
Sep 3 23:24:15.144677 kernel:   with arguments:
Sep 3 23:24:15.144696 kernel:     /init
Sep 3 23:24:15.144714 kernel:   with environment:
Sep 3 23:24:15.144731 kernel:     HOME=/
Sep 3 23:24:15.144758 kernel:     TERM=linux
Sep 3 23:24:15.144776 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:24:15.144796 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:24:15.144821 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:24:15.144842 systemd[1]: Detected virtualization amazon.
Sep 3 23:24:15.144860 systemd[1]: Detected architecture arm64.
Sep 3 23:24:15.144882 systemd[1]: Running in initrd.
Sep 3 23:24:15.144907 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:24:15.144928 systemd[1]: Hostname set to .
Sep 3 23:24:15.144948 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:24:15.144968 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:24:15.144988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:24:15.145008 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:24:15.145030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:24:15.145052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:24:15.145078 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:24:15.145119 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:24:15.145145 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:24:15.145166 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:24:15.145186 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:24:15.145207 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:24:15.145228 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:24:15.145254 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:24:15.145274 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:24:15.145293 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:24:15.145312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:24:15.145331 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:24:15.145350 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:24:15.145370 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:24:15.145389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:24:15.145412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:24:15.145431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:24:15.145463 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:24:15.145486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:24:15.145506 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:24:15.145525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:24:15.145545 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:24:15.145564 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:24:15.145583 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:24:15.145608 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:24:15.145627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:24:15.146144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:24:15.146168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:24:15.146196 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:24:15.146216 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:24:15.146235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:24:15.146299 systemd-journald[257]: Collecting audit messages is disabled.
Sep 3 23:24:15.146346 kernel: Bridge firewalling registered
Sep 3 23:24:15.146382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:24:15.146403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:24:15.146423 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:24:15.146443 systemd-journald[257]: Journal started
Sep 3 23:24:15.146483 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2cf751400ff87c6531819e9e4a2863) is 8M, max 75.3M, 67.3M free.
Sep 3 23:24:15.067360 systemd-modules-load[259]: Inserted module 'overlay'
Sep 3 23:24:15.117044 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 3 23:24:15.161721 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:24:15.164107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:24:15.172400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:24:15.180516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:24:15.193073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:24:15.204764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:24:15.239925 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:24:15.244777 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:24:15.248448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:24:15.259850 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:24:15.269798 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:24:15.279833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:24:15.333430 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:24:15.375327 systemd-resolved[299]: Positive Trust Anchors:
Sep 3 23:24:15.377536 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:24:15.380786 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:24:15.489675 kernel: SCSI subsystem initialized
Sep 3 23:24:15.497677 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:24:15.509685 kernel: iscsi: registered transport (tcp)
Sep 3 23:24:15.531683 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:24:15.531767 kernel: QLogic iSCSI HBA Driver
Sep 3 23:24:15.564809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:24:15.602126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:24:15.614253 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:24:15.657697 kernel: random: crng init done
Sep 3 23:24:15.658278 systemd-resolved[299]: Defaulting to hostname 'linux'.
Sep 3 23:24:15.662060 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:24:15.665813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:24:15.696236 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:24:15.703304 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:24:15.790704 kernel: raid6: neonx8 gen() 6590 MB/s
Sep 3 23:24:15.807671 kernel: raid6: neonx4 gen() 6580 MB/s
Sep 3 23:24:15.824670 kernel: raid6: neonx2 gen() 5465 MB/s
Sep 3 23:24:15.841674 kernel: raid6: neonx1 gen() 3943 MB/s
Sep 3 23:24:15.858694 kernel: raid6: int64x8 gen() 3641 MB/s
Sep 3 23:24:15.875690 kernel: raid6: int64x4 gen() 3692 MB/s
Sep 3 23:24:15.892672 kernel: raid6: int64x2 gen() 3594 MB/s
Sep 3 23:24:15.910643 kernel: raid6: int64x1 gen() 2739 MB/s
Sep 3 23:24:15.910687 kernel: raid6: using algorithm neonx8 gen() 6590 MB/s
Sep 3 23:24:15.928675 kernel: raid6: .... xor() 4574 MB/s, rmw enabled
Sep 3 23:24:15.928718 kernel: raid6: using neon recovery algorithm
Sep 3 23:24:15.937354 kernel: xor: measuring software checksum speed
Sep 3 23:24:15.937410 kernel: 8regs : 12911 MB/sec
Sep 3 23:24:15.938517 kernel: 32regs : 13043 MB/sec
Sep 3 23:24:15.940881 kernel: arm64_neon : 8594 MB/sec
Sep 3 23:24:15.940916 kernel: xor: using function: 32regs (13043 MB/sec)
Sep 3 23:24:16.032683 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:24:16.044725 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:24:16.053735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:24:16.116235 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Sep 3 23:24:16.128539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:24:16.134917 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:24:16.176690 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation
Sep 3 23:24:16.220506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:24:16.227531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:24:16.359095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:24:16.368380 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:24:16.522993 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:24:16.523060 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 3 23:24:16.535687 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 3 23:24:16.536010 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 3 23:24:16.545707 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 3 23:24:16.545779 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 3 23:24:16.550720 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:53:cc:ce:02:5d
Sep 3 23:24:16.559678 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 3 23:24:16.571126 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:24:16.571781 kernel: GPT:9289727 != 16777215
Sep 3 23:24:16.571810 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:24:16.571461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:24:16.580612 kernel: GPT:9289727 != 16777215
Sep 3 23:24:16.580668 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:24:16.580697 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:24:16.571708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:24:16.580735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:24:16.588996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:24:16.593321 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:24:16.603603 (udev-worker)[573]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:24:16.644846 kernel: nvme nvme0: using unchecked data buffer
Sep 3 23:24:16.649768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:24:16.788137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 3 23:24:16.844706 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 3 23:24:16.849686 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:24:16.890984 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 3 23:24:16.892335 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 3 23:24:16.921393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 3 23:24:16.924096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:24:16.931552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:24:16.939365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:24:16.944920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:24:16.949757 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:24:16.983852 disk-uuid[686]: Primary Header is updated.
Sep 3 23:24:16.983852 disk-uuid[686]: Secondary Entries is updated.
Sep 3 23:24:16.983852 disk-uuid[686]: Secondary Header is updated.
Sep 3 23:24:16.999657 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:24:17.006680 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:24:18.018661 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:24:18.022059 disk-uuid[690]: The operation has completed successfully.
Sep 3 23:24:18.203664 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:24:18.207745 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:24:18.312867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:24:18.345540 sh[954]: Success
Sep 3 23:24:18.375283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:24:18.375357 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:24:18.376185 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:24:18.389665 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:24:18.492954 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:24:18.500856 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:24:18.523755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:24:18.542717 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (977)
Sep 3 23:24:18.546573 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:24:18.546649 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:24:18.574904 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 3 23:24:18.574974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:24:18.575001 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:24:18.589198 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:24:18.589994 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:24:18.598259 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:24:18.604821 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:24:18.610834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:24:18.661691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:8) scanned by mount (1011)
Sep 3 23:24:18.666537 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:24:18.666617 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:24:18.684170 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:24:18.684252 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:24:18.691722 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:24:18.693550 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:24:18.699404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:24:18.788212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:24:18.798893 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:24:18.873514 systemd-networkd[1146]: lo: Link UP
Sep 3 23:24:18.873535 systemd-networkd[1146]: lo: Gained carrier
Sep 3 23:24:18.876227 systemd-networkd[1146]: Enumeration completed
Sep 3 23:24:18.876373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:24:18.877374 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:24:18.877381 systemd-networkd[1146]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:24:18.882474 systemd[1]: Reached target network.target - Network.
Sep 3 23:24:18.886119 systemd-networkd[1146]: eth0: Link UP
Sep 3 23:24:18.886126 systemd-networkd[1146]: eth0: Gained carrier
Sep 3 23:24:18.886149 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:24:18.917754 systemd-networkd[1146]: eth0: DHCPv4 address 172.31.22.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 3 23:24:19.145972 ignition[1079]: Ignition 2.21.0
Sep 3 23:24:19.146002 ignition[1079]: Stage: fetch-offline
Sep 3 23:24:19.149476 ignition[1079]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:19.149513 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:19.154148 ignition[1079]: Ignition finished successfully
Sep 3 23:24:19.156905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:24:19.163965 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 3 23:24:19.207233 ignition[1158]: Ignition 2.21.0
Sep 3 23:24:19.207745 ignition[1158]: Stage: fetch
Sep 3 23:24:19.208653 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:19.208681 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:19.209238 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:19.236379 ignition[1158]: PUT result: OK
Sep 3 23:24:19.240390 ignition[1158]: parsed url from cmdline: ""
Sep 3 23:24:19.240541 ignition[1158]: no config URL provided
Sep 3 23:24:19.240563 ignition[1158]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:24:19.241149 ignition[1158]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:24:19.241230 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:19.244970 ignition[1158]: PUT result: OK
Sep 3 23:24:19.250815 ignition[1158]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 3 23:24:19.254110 ignition[1158]: GET result: OK
Sep 3 23:24:19.255707 ignition[1158]: parsing config with SHA512: a2de8bb45e10ac0a6926d646e3502b14586cfcea357d4c00376ca5d5091d6381e25a3107722fd0d8cd59a5e1cbf8a05809ecd945cb6e1f834729630251c7a588
Sep 3 23:24:19.268888 unknown[1158]: fetched base config from "system"
Sep 3 23:24:19.269571 ignition[1158]: fetch: fetch complete
Sep 3 23:24:19.268910 unknown[1158]: fetched base config from "system"
Sep 3 23:24:19.269582 ignition[1158]: fetch: fetch passed
Sep 3 23:24:19.268922 unknown[1158]: fetched user config from "aws"
Sep 3 23:24:19.269954 ignition[1158]: Ignition finished successfully
Sep 3 23:24:19.276583 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 3 23:24:19.284045 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:24:19.325845 ignition[1164]: Ignition 2.21.0
Sep 3 23:24:19.325876 ignition[1164]: Stage: kargs
Sep 3 23:24:19.326966 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:19.326991 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:19.327136 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:19.329224 ignition[1164]: PUT result: OK
Sep 3 23:24:19.343060 ignition[1164]: kargs: kargs passed
Sep 3 23:24:19.343235 ignition[1164]: Ignition finished successfully
Sep 3 23:24:19.349758 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:24:19.353932 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:24:19.408197 ignition[1171]: Ignition 2.21.0
Sep 3 23:24:19.408228 ignition[1171]: Stage: disks
Sep 3 23:24:19.409224 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:19.409813 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:19.409966 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:19.411983 ignition[1171]: PUT result: OK
Sep 3 23:24:19.425106 ignition[1171]: disks: disks passed
Sep 3 23:24:19.425346 ignition[1171]: Ignition finished successfully
Sep 3 23:24:19.431716 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:24:19.436027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:24:19.440829 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:24:19.445826 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:24:19.448229 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:24:19.455058 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:24:19.459440 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:24:19.528681 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:24:19.536575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:24:19.543744 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:24:19.683670 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:24:19.685508 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:24:19.689823 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:24:19.696779 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:24:19.701672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:24:19.706296 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:24:19.711743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:24:19.711850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:24:19.733550 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:24:19.739677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:24:19.757671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:8) scanned by mount (1199)
Sep 3 23:24:19.763377 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:24:19.763459 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:24:19.770579 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:24:19.770710 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:24:19.773035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:24:20.035331 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:24:20.045451 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:24:20.054940 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:24:20.063807 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:24:20.124978 systemd-networkd[1146]: eth0: Gained IPv6LL
Sep 3 23:24:20.246505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:24:20.251583 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:24:20.262293 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:24:20.281392 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:24:20.287588 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:24:20.321561 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:24:20.342679 ignition[1312]: INFO : Ignition 2.21.0
Sep 3 23:24:20.342679 ignition[1312]: INFO : Stage: mount
Sep 3 23:24:20.346204 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:20.346204 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:20.346204 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:20.354577 ignition[1312]: INFO : PUT result: OK
Sep 3 23:24:20.361914 ignition[1312]: INFO : mount: mount passed
Sep 3 23:24:20.361914 ignition[1312]: INFO : Ignition finished successfully
Sep 3 23:24:20.367721 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:24:20.372863 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:24:20.690001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:24:20.725668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:8) scanned by mount (1325)
Sep 3 23:24:20.729916 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:24:20.730030 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:24:20.737034 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:24:20.737129 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:24:20.740591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:24:20.789965 ignition[1342]: INFO : Ignition 2.21.0
Sep 3 23:24:20.789965 ignition[1342]: INFO : Stage: files
Sep 3 23:24:20.793589 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:20.793589 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:20.798526 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:20.803675 ignition[1342]: INFO : PUT result: OK
Sep 3 23:24:20.810066 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:24:20.818063 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:24:20.818063 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:24:20.828774 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:24:20.831997 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:24:20.835427 unknown[1342]: wrote ssh authorized keys file for user: core
Sep 3 23:24:20.838750 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:24:20.842246 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 3 23:24:20.846620 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 3 23:24:20.898773 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:24:21.178947 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 3 23:24:21.183268 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:24:21.187107 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:24:21.187107 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:24:21.194759 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:24:21.198584 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:24:21.202981 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:24:21.206860 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:24:21.210832 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:24:21.219326 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:24:21.223702 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:24:21.223702 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 3 23:24:21.233290 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 3 23:24:21.238796 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 3 23:24:21.238796 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 3 23:24:21.820612 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 3 23:24:22.222369 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 3 23:24:22.222369 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 3 23:24:22.230379 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:24:22.239113 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:24:22.239113 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 3 23:24:22.239113 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:24:22.251016 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:24:22.251016 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:24:22.251016 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:24:22.251016 ignition[1342]: INFO : files: files passed
Sep 3 23:24:22.251016 ignition[1342]: INFO : Ignition finished successfully
Sep 3 23:24:22.244761 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:24:22.262626 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:24:22.271039 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:24:22.297822 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:24:22.300371 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:24:22.315165 initrd-setup-root-after-ignition[1375]: grep:
Sep 3 23:24:22.315165 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:24:22.315165 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:24:22.324227 initrd-setup-root-after-ignition[1375]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:24:22.330180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:24:22.336902 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:24:22.343308 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:24:22.415689 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:24:22.415991 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:24:22.423820 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:24:22.428967 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:24:22.431399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:24:22.433409 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:24:22.474701 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:24:22.480407 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:24:22.518357 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:24:22.523854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:24:22.527450 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:24:22.531517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:24:22.531908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:24:22.538371 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:24:22.541053 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:24:22.545508 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:24:22.549340 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:24:22.556339 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:24:22.559122 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:24:22.564206 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:24:22.568745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:24:22.571976 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:24:22.581087 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:24:22.588856 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:24:22.592339 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:24:22.592607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:24:22.598109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:24:22.602435 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:24:22.610143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:24:22.613756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:24:22.616497 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:24:22.616758 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:24:22.625402 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:24:22.626049 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:24:22.630467 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:24:22.631002 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:24:22.644996 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:24:22.647343 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:24:22.647694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:24:22.662257 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:24:22.665772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:24:22.666066 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:24:22.670950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:24:22.679730 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:24:22.703173 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:24:22.707757 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:24:22.718773 ignition[1395]: INFO : Ignition 2.21.0
Sep 3 23:24:22.718773 ignition[1395]: INFO : Stage: umount
Sep 3 23:24:22.726981 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:24:22.726981 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:24:22.726981 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:24:22.739360 ignition[1395]: INFO : PUT result: OK
Sep 3 23:24:22.745117 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:24:22.748342 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:24:22.748549 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:24:22.756865 ignition[1395]: INFO : umount: umount passed
Sep 3 23:24:22.756865 ignition[1395]: INFO : Ignition finished successfully
Sep 3 23:24:22.762596 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:24:22.763041 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:24:22.770728 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:24:22.770896 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:24:22.777413 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:24:22.777540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:24:22.783781 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 3 23:24:22.783883 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 3 23:24:22.788231 systemd[1]: Stopped target network.target - Network.
Sep 3 23:24:22.794000 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:24:22.794106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:24:22.797169 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:24:22.804503 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:24:22.806434 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:24:22.809586 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:24:22.811967 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:24:22.819897 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:24:22.819975 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:24:22.823448 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:24:22.823529 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:24:22.827116 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:24:22.827244 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:24:22.830356 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:24:22.830462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:24:22.836326 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:24:22.836466 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:24:22.839752 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:24:22.847589 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:24:22.861223 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:24:22.861433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:24:22.884534 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:24:22.885412 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:24:22.885776 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:24:22.893347 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:24:22.895056 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:24:22.901178 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:24:22.901258 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:24:22.913868 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:24:22.920194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:24:22.920311 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:24:22.923820 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:24:22.923923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:24:22.928575 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:24:22.928712 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:24:22.934818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:24:22.934926 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:24:22.938800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:24:22.951597 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:24:22.951750 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:24:22.985787 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:24:22.986072 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:24:22.992325 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:24:22.992625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:24:22.997049 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:24:22.997129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:24:23.002189 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:24:23.002255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:24:23.008682 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:24:23.008788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:24:23.023196 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:24:23.023307 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:24:23.030213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:24:23.030755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:24:23.038654 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:24:23.045201 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:24:23.045345 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:24:23.049172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:24:23.054112 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:24:23.062415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:24:23.062725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:24:23.073893 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 3 23:24:23.074023 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 3 23:24:23.077017 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:24:23.088232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:24:23.088422 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:24:23.091748 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:24:23.097917 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:24:23.146456 systemd[1]: Switching root.
Sep 3 23:24:23.202103 systemd-journald[257]: Journal stopped
Sep 3 23:24:25.180975 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:24:25.181093 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:24:25.181147 kernel: SELinux: policy capability open_perms=1
Sep 3 23:24:25.181182 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:24:25.181212 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:24:25.181239 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:24:25.181268 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:24:25.181295 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:24:25.181323 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:24:25.181352 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:24:25.181379 kernel: audit: type=1403 audit(1756941863.477:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:24:25.181430 systemd[1]: Successfully loaded SELinux policy in 49.527ms.
Sep 3 23:24:25.181483 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.141ms.
Sep 3 23:24:25.181514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:24:25.181546 systemd[1]: Detected virtualization amazon.
Sep 3 23:24:25.181576 systemd[1]: Detected architecture arm64.
Sep 3 23:24:25.181605 systemd[1]: Detected first boot.
Sep 3 23:24:25.190554 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:24:25.190677 zram_generator::config[1439]: No configuration found.
Sep 3 23:24:25.190716 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:24:25.190749 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:24:25.190789 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:24:25.190822 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:24:25.190853 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:24:25.190882 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:24:25.190914 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:24:25.190945 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:24:25.190976 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:24:25.191006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:24:25.191040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:24:25.191071 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:24:25.191102 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:24:25.191140 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:24:25.191172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:24:25.191200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:24:25.191230 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:24:25.191260 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:24:25.191290 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:24:25.191324 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:24:25.191352 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 3 23:24:25.191382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:24:25.191431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:24:25.191462 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:24:25.191490 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:24:25.191521 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:24:25.191550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:24:25.191583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:24:25.191618 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:24:25.191676 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:24:25.191709 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:24:25.191747 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:24:25.191779 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:24:25.191807 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:24:25.191837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:24:25.191865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:24:25.191899 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:24:25.191929 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:24:25.191956 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:24:25.191986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:24:25.192028 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:24:25.192062 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:24:25.192090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:24:25.192120 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:24:25.192149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:24:25.192181 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:24:25.192211 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:24:25.192242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:24:25.192272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:24:25.192304 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:24:25.192334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:24:25.192362 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:24:25.192392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:24:25.192425 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:24:25.192480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:24:25.192527 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:24:25.192558 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:24:25.192587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:24:25.192614 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:24:25.193273 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:24:25.193320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:24:25.193358 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:24:25.193386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:24:25.193417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:24:25.193448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:24:25.193480 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:24:25.193512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:24:25.193542 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:24:25.193861 systemd[1]: Stopped verity-setup.service.
Sep 3 23:24:25.193891 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:24:25.193919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:24:25.193953 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:24:25.193986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:24:25.194013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:24:25.194040 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:24:25.194070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:24:25.194097 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:24:25.195688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:24:25.195740 kernel: fuse: init (API version 7.41)
Sep 3 23:24:25.195779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:24:25.195808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:24:25.195840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:24:25.195869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:24:25.195899 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:24:25.195927 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:24:25.195954 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:24:25.195985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:24:25.196012 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:24:25.196039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:24:25.196074 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:24:25.196118 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:24:25.196154 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:24:25.196185 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:24:25.196213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:24:25.196241 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:24:25.196269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:24:25.196301 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:24:25.196327 kernel: loop: module loaded
Sep 3 23:24:25.196403 systemd-journald[1522]: Collecting audit messages is disabled.
Sep 3 23:24:25.196472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:24:25.196506 systemd-journald[1522]: Journal started
Sep 3 23:24:25.196555 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec2cf751400ff87c6531819e9e4a2863) is 8M, max 75.3M, 67.3M free.
Sep 3 23:24:24.475369 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:24:24.498257 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 3 23:24:24.499082 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:24:25.220865 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:24:25.220947 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:24:25.224340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:24:25.225738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:24:25.241970 kernel: ACPI: bus type drm_connector registered
Sep 3 23:24:25.233860 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:24:25.238208 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:24:25.243188 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:24:25.249583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:24:25.251080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:24:25.257263 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:24:25.293915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:24:25.302715 kernel: loop0: detected capacity change from 0 to 138376
Sep 3 23:24:25.310484 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:24:25.335238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:24:25.348497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:24:25.357478 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:24:25.365968 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:24:25.374253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:24:25.377151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:24:25.382185 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:24:25.400085 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:24:25.439901 kernel: loop1: detected capacity change from 0 to 61240
Sep 3 23:24:25.444513 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec2cf751400ff87c6531819e9e4a2863 is 70.322ms for 939 entries.
Sep 3 23:24:25.444513 systemd-journald[1522]: System Journal (/var/log/journal/ec2cf751400ff87c6531819e9e4a2863) is 8M, max 195.6M, 187.6M free.
Sep 3 23:24:25.527075 systemd-journald[1522]: Received client request to flush runtime journal.
Sep 3 23:24:25.533475 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:24:25.544863 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:24:25.550014 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:24:25.575666 kernel: loop2: detected capacity change from 0 to 107312
Sep 3 23:24:25.602775 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:24:25.615011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:24:25.653139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:24:25.659715 kernel: loop3: detected capacity change from 0 to 211168
Sep 3 23:24:25.702112 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 3 23:24:25.702153 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 3 23:24:25.726760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:24:25.773704 kernel: loop4: detected capacity change from 0 to 138376
Sep 3 23:24:25.817936 kernel: loop5: detected capacity change from 0 to 61240
Sep 3 23:24:25.841887 kernel: loop6: detected capacity change from 0 to 107312
Sep 3 23:24:25.875417 kernel: loop7: detected capacity change from 0 to 211168
Sep 3 23:24:25.910785 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 3 23:24:25.916007 (sd-merge)[1598]: Merged extensions into '/usr'.
Sep 3 23:24:25.935815 systemd[1]: Reload requested from client PID 1551 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:24:25.935850 systemd[1]: Reloading...
Sep 3 23:24:26.141776 zram_generator::config[1627]: No configuration found.
Sep 3 23:24:26.222925 ldconfig[1543]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:24:26.416969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:24:26.610416 systemd[1]: Reloading finished in 673 ms.
Sep 3 23:24:26.636491 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:24:26.639776 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:24:26.643712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:24:26.662815 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:24:26.667938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:24:26.674860 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:24:26.699564 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:24:26.701764 systemd[1]: Reloading...
Sep 3 23:24:26.755555 systemd-udevd[1679]: Using default interface naming scheme 'v255'.
Sep 3 23:24:26.775294 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:24:26.775369 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:24:26.775998 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:24:26.776530 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:24:26.780405 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:24:26.784114 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Sep 3 23:24:26.784269 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Sep 3 23:24:26.799986 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:24:26.800015 systemd-tmpfiles[1678]: Skipping /boot
Sep 3 23:24:26.884342 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:24:26.884373 systemd-tmpfiles[1678]: Skipping /boot
Sep 3 23:24:27.000982 zram_generator::config[1742]: No configuration found.
Sep 3 23:24:27.239834 (udev-worker)[1717]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:24:27.363177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:24:27.619702 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 3 23:24:27.620624 systemd[1]: Reloading finished in 918 ms.
Sep 3 23:24:27.649447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:24:27.653722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:24:27.762041 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:24:27.771110 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:24:27.774028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:24:27.778221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:24:27.784383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:24:27.816945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:24:27.819477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:24:27.819757 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:24:27.824048 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:24:27.834797 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:24:27.854234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:24:27.862046 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:24:27.870410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:24:27.873259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:24:27.884055 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:24:27.885825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:24:27.909139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:24:27.911755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:24:27.920855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:24:27.928807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:24:27.931512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:24:27.931786 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:24:27.931984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:24:27.945188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:24:27.951548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:24:27.959429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:24:27.965395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:24:27.967924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:24:27.968145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:24:27.968501 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:24:27.985350 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:24:28.000024 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:24:28.011745 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:24:28.014721 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:24:28.025034 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:24:28.027507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:24:28.041153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:24:28.048599 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:24:28.052409 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:24:28.078628 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:24:28.094553 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:24:28.103508 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:24:28.120292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:24:28.121800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:24:28.126610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:24:28.127009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:24:28.131057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:24:28.131207 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:24:28.159168 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 3 23:24:28.174269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 3 23:24:28.188103 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 3 23:24:28.239343 augenrules[1939]: No rules Sep 3 23:24:28.241951 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:24:28.245580 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:24:28.255590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 3 23:24:28.333791 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 3 23:24:28.338119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:24:28.459275 systemd-networkd[1894]: lo: Link UP Sep 3 23:24:28.459868 systemd-networkd[1894]: lo: Gained carrier Sep 3 23:24:28.462881 systemd-networkd[1894]: Enumeration completed Sep 3 23:24:28.463338 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:24:28.469071 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 3 23:24:28.471939 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:24:28.472070 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:24:28.475507 systemd-networkd[1894]: eth0: Link UP Sep 3 23:24:28.476032 systemd-networkd[1894]: eth0: Gained carrier Sep 3 23:24:28.476079 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 3 23:24:28.477048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 3 23:24:28.486843 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.22.232/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 3 23:24:28.486958 systemd-resolved[1895]: Positive Trust Anchors: Sep 3 23:24:28.486981 systemd-resolved[1895]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:24:28.487045 systemd-resolved[1895]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:24:28.502157 systemd-resolved[1895]: Defaulting to hostname 'linux'. Sep 3 23:24:28.505664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:24:28.508305 systemd[1]: Reached target network.target - Network. Sep 3 23:24:28.510286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:24:28.512922 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:24:28.515368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 3 23:24:28.518091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 3 23:24:28.521077 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 3 23:24:28.523685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 3 23:24:28.526478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 3 23:24:28.529716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 3 23:24:28.529775 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:24:28.533080 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:24:28.538759 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 3 23:24:28.547827 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 3 23:24:28.556218 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 3 23:24:28.559387 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 3 23:24:28.562211 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 3 23:24:28.577807 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 3 23:24:28.580963 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 3 23:24:28.585267 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 3 23:24:28.588971 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 3 23:24:28.592268 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:24:28.594939 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:24:28.597296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:24:28.597379 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:24:28.599748 systemd[1]: Starting containerd.service - containerd container runtime... Sep 3 23:24:28.604472 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Sep 3 23:24:28.613127 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 3 23:24:28.623113 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 3 23:24:28.630141 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 3 23:24:28.637913 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 3 23:24:28.641782 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 3 23:24:28.645694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 3 23:24:28.674178 systemd[1]: Started ntpd.service - Network Time Service. Sep 3 23:24:28.686003 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 3 23:24:28.691597 jq[1966]: false Sep 3 23:24:28.693440 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 3 23:24:28.704110 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 3 23:24:28.714939 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 3 23:24:28.728225 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 3 23:24:28.732250 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 3 23:24:28.745116 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 3 23:24:28.748728 systemd[1]: Starting update-engine.service - Update Engine... Sep 3 23:24:28.757525 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 3 23:24:28.772460 extend-filesystems[1967]: Found /dev/nvme0n1p6 Sep 3 23:24:28.784726 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Sep 3 23:24:28.789276 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 3 23:24:28.797470 extend-filesystems[1967]: Found /dev/nvme0n1p9 Sep 3 23:24:28.789770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 3 23:24:28.816028 extend-filesystems[1967]: Checking size of /dev/nvme0n1p9 Sep 3 23:24:28.818766 systemd[1]: motdgen.service: Deactivated successfully. Sep 3 23:24:28.820588 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 3 23:24:28.829123 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 3 23:24:28.834220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 3 23:24:28.879559 jq[1987]: true Sep 3 23:24:28.899801 extend-filesystems[1967]: Resized partition /dev/nvme0n1p9 Sep 3 23:24:28.929790 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: ---------------------------------------------------- Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: ntp-4 is maintained by Network Time Foundation, Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: corporation. 
Support and training for ntp-4 are Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: available at https://www.nwtime.org/support Sep 3 23:24:28.934239 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: ---------------------------------------------------- Sep 3 23:24:28.938928 extend-filesystems[2017]: resize2fs 1.47.2 (1-Jan-2025) Sep 3 23:24:28.953457 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 3 23:24:28.929851 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 3 23:24:28.958247 tar[1990]: linux-arm64/LICENSE Sep 3 23:24:28.958247 tar[1990]: linux-arm64/helm Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: proto: precision = 0.096 usec (-23) Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: basedate set to 2025-08-22 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: gps base set to 2025-08-24 (week 2381) Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listen normally on 3 eth0 172.31.22.232:123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listen normally on 4 lo [::1]:123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: bind(21) AF_INET6 fe80::453:ccff:fece:25d%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: unable to create socket on eth0 (5) for fe80::453:ccff:fece:25d%2#123 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: failed to init interface for address fe80::453:ccff:fece:25d%2 Sep 3 23:24:28.958673 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: Listening on routing socket on fd #21 for interface updates Sep 3 23:24:28.929869 ntpd[1969]: 
---------------------------------------------------- Sep 3 23:24:28.929886 ntpd[1969]: ntp-4 is maintained by Network Time Foundation, Sep 3 23:24:28.929903 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 3 23:24:28.929919 ntpd[1969]: corporation. Support and training for ntp-4 are Sep 3 23:24:28.929936 ntpd[1969]: available at https://www.nwtime.org/support Sep 3 23:24:28.929955 ntpd[1969]: ---------------------------------------------------- Sep 3 23:24:28.939474 ntpd[1969]: proto: precision = 0.096 usec (-23) Sep 3 23:24:28.944961 ntpd[1969]: basedate set to 2025-08-22 Sep 3 23:24:28.944992 ntpd[1969]: gps base set to 2025-08-24 (week 2381) Sep 3 23:24:29.049010 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:24:29.049010 ntpd[1969]: 3 Sep 23:24:28 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:24:28.951040 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123 Sep 3 23:24:28.985604 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 3 23:24:29.049870 update_engine[1983]: I20250903 23:24:28.977100 1983 main.cc:92] Flatcar Update Engine starting Sep 3 23:24:29.049870 update_engine[1983]: I20250903 23:24:29.022909 1983 update_check_scheduler.cc:74] Next update check in 2m53s Sep 3 23:24:28.951231 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 3 23:24:28.987729 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 3 23:24:29.050462 jq[2009]: true Sep 3 23:24:28.951478 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123 Sep 3 23:24:28.999143 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 3 23:24:28.951535 ntpd[1969]: Listen normally on 3 eth0 172.31.22.232:123 Sep 3 23:24:28.999192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 3 23:24:28.951600 ntpd[1969]: Listen normally on 4 lo [::1]:123 Sep 3 23:24:29.002119 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 3 23:24:28.952841 ntpd[1969]: bind(21) AF_INET6 fe80::453:ccff:fece:25d%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:24:29.002156 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 3 23:24:28.952889 ntpd[1969]: unable to create socket on eth0 (5) for fe80::453:ccff:fece:25d%2#123 Sep 3 23:24:29.052899 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 3 23:24:28.952944 ntpd[1969]: failed to init interface for address fe80::453:ccff:fece:25d%2 Sep 3 23:24:29.055397 systemd[1]: Started update-engine.service - Update Engine. 
Sep 3 23:24:28.953004 ntpd[1969]: Listening on routing socket on fd #21 for interface updates Sep 3 23:24:28.968789 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:24:28.968836 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:24:28.987402 dbus-daemon[1964]: [system] SELinux support is enabled Sep 3 23:24:29.081238 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 3 23:24:29.007058 dbus-daemon[1964]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 3 23:24:29.012683 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 3 23:24:29.082594 extend-filesystems[2017]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 3 23:24:29.082594 extend-filesystems[2017]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 3 23:24:29.082594 extend-filesystems[2017]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 3 23:24:29.098842 extend-filesystems[1967]: Resized filesystem in /dev/nvme0n1p9 Sep 3 23:24:29.123882 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 3 23:24:29.127367 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 3 23:24:29.127834 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 3 23:24:29.168589 coreos-metadata[1963]: Sep 03 23:24:29.164 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 3 23:24:29.174826 coreos-metadata[1963]: Sep 03 23:24:29.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 3 23:24:29.182863 coreos-metadata[1963]: Sep 03 23:24:29.178 INFO Fetch successful Sep 3 23:24:29.182863 coreos-metadata[1963]: Sep 03 23:24:29.178 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 3 23:24:29.187469 coreos-metadata[1963]: Sep 03 23:24:29.187 INFO Fetch successful Sep 3 23:24:29.187593 coreos-metadata[1963]: Sep 03 23:24:29.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 3 23:24:29.188787 coreos-metadata[1963]: Sep 03 23:24:29.188 INFO Fetch successful Sep 3 23:24:29.188945 coreos-metadata[1963]: Sep 03 23:24:29.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 3 23:24:29.190424 coreos-metadata[1963]: Sep 03 23:24:29.190 INFO Fetch successful Sep 3 23:24:29.190424 coreos-metadata[1963]: Sep 03 23:24:29.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 3 23:24:29.195857 coreos-metadata[1963]: Sep 03 23:24:29.194 INFO Fetch failed with 404: resource not found Sep 3 23:24:29.195857 coreos-metadata[1963]: Sep 03 23:24:29.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 3 23:24:29.200479 coreos-metadata[1963]: Sep 03 23:24:29.197 INFO Fetch successful Sep 3 23:24:29.200479 coreos-metadata[1963]: Sep 03 23:24:29.197 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 3 23:24:29.207822 coreos-metadata[1963]: Sep 03 23:24:29.205 INFO Fetch successful Sep 3 23:24:29.207822 coreos-metadata[1963]: Sep 03 23:24:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 3 23:24:29.210699 
coreos-metadata[1963]: Sep 03 23:24:29.209 INFO Fetch successful Sep 3 23:24:29.210699 coreos-metadata[1963]: Sep 03 23:24:29.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 3 23:24:29.211690 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 3 23:24:29.222458 coreos-metadata[1963]: Sep 03 23:24:29.221 INFO Fetch successful Sep 3 23:24:29.222458 coreos-metadata[1963]: Sep 03 23:24:29.221 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 3 23:24:29.226024 coreos-metadata[1963]: Sep 03 23:24:29.223 INFO Fetch successful Sep 3 23:24:29.266072 bash[2048]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:24:29.320234 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 3 23:24:29.335331 systemd[1]: Starting sshkeys.service... Sep 3 23:24:29.349033 systemd-logind[1979]: Watching system buttons on /dev/input/event0 (Power Button) Sep 3 23:24:29.351206 systemd-logind[1979]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 3 23:24:29.353986 systemd-logind[1979]: New seat seat0. Sep 3 23:24:29.365770 systemd[1]: Started systemd-logind.service - User Login Management. Sep 3 23:24:29.456325 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 3 23:24:29.470888 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 3 23:24:29.476857 dbus-daemon[1964]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 3 23:24:29.485178 systemd[1]: Starting polkit.service - Authorization Manager... Sep 3 23:24:29.501955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Sep 3 23:24:29.505485 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 3 23:24:29.537372 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 3 23:24:29.544332 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 3 23:24:29.641820 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 3 23:24:29.823527 coreos-metadata[2092]: Sep 03 23:24:29.823 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 3 23:24:29.828312 coreos-metadata[2092]: Sep 03 23:24:29.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 3 23:24:29.833667 coreos-metadata[2092]: Sep 03 23:24:29.832 INFO Fetch successful Sep 3 23:24:29.833667 coreos-metadata[2092]: Sep 03 23:24:29.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 3 23:24:29.836237 coreos-metadata[2092]: Sep 03 23:24:29.836 INFO Fetch successful Sep 3 23:24:29.843466 unknown[2092]: wrote ssh authorized keys file for user: core Sep 3 23:24:29.917830 systemd-networkd[1894]: eth0: Gained IPv6LL Sep 3 23:24:29.935897 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 3 23:24:29.941061 systemd[1]: Reached target network-online.target - Network is Online. Sep 3 23:24:29.951909 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 3 23:24:29.961749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:29.975055 update-ssh-keys[2133]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:24:29.972498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 3 23:24:29.976669 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 3 23:24:29.989422 systemd[1]: Finished sshkeys.service. 
Sep 3 23:24:30.041694 containerd[2014]: time="2025-09-03T23:24:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 3 23:24:30.059126 containerd[2014]: time="2025-09-03T23:24:30.059047761Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 3 23:24:30.136687 containerd[2014]: time="2025-09-03T23:24:30.135325473Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.824µs" Sep 3 23:24:30.136687 containerd[2014]: time="2025-09-03T23:24:30.135385833Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 3 23:24:30.136687 containerd[2014]: time="2025-09-03T23:24:30.135423909Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 3 23:24:30.144975 containerd[2014]: time="2025-09-03T23:24:30.144906501Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 3 23:24:30.144975 containerd[2014]: time="2025-09-03T23:24:30.144975429Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 3 23:24:30.145155 containerd[2014]: time="2025-09-03T23:24:30.145034625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145202 containerd[2014]: time="2025-09-03T23:24:30.145153473Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145202 containerd[2014]: time="2025-09-03T23:24:30.145179909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145628 
containerd[2014]: time="2025-09-03T23:24:30.145574373Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145628 containerd[2014]: time="2025-09-03T23:24:30.145618785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145779 containerd[2014]: time="2025-09-03T23:24:30.145672257Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145779 containerd[2014]: time="2025-09-03T23:24:30.145696197Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 3 23:24:30.145913 containerd[2014]: time="2025-09-03T23:24:30.145872489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 3 23:24:30.146321 containerd[2014]: time="2025-09-03T23:24:30.146269341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:24:30.146393 containerd[2014]: time="2025-09-03T23:24:30.146354229Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:24:30.146393 containerd[2014]: time="2025-09-03T23:24:30.146382897Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 3 23:24:30.151591 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 3 23:24:30.163059 containerd[2014]: time="2025-09-03T23:24:30.161281881Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 3 23:24:30.163193 containerd[2014]: time="2025-09-03T23:24:30.163151361Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 3 23:24:30.167254 containerd[2014]: time="2025-09-03T23:24:30.163326009Z" level=info msg="metadata content store policy set" policy=shared Sep 3 23:24:30.175280 containerd[2014]: time="2025-09-03T23:24:30.175147293Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 3 23:24:30.175280 containerd[2014]: time="2025-09-03T23:24:30.175265973Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 3 23:24:30.175424 containerd[2014]: time="2025-09-03T23:24:30.175379997Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 3 23:24:30.175471 containerd[2014]: time="2025-09-03T23:24:30.175415421Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 3 23:24:30.175471 containerd[2014]: time="2025-09-03T23:24:30.175445229Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 3 23:24:30.175571 containerd[2014]: time="2025-09-03T23:24:30.175477017Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 3 23:24:30.175571 containerd[2014]: time="2025-09-03T23:24:30.175505901Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 3 23:24:30.175571 containerd[2014]: time="2025-09-03T23:24:30.175535385Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 3 23:24:30.175720 containerd[2014]: 
time="2025-09-03T23:24:30.175575153Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 3 23:24:30.175720 containerd[2014]: time="2025-09-03T23:24:30.175604085Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 3 23:24:30.175720 containerd[2014]: time="2025-09-03T23:24:30.175629285Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 3 23:24:30.175720 containerd[2014]: time="2025-09-03T23:24:30.175686717Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 3 23:24:30.175954 containerd[2014]: time="2025-09-03T23:24:30.175910781Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 3 23:24:30.176020 containerd[2014]: time="2025-09-03T23:24:30.175963497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 3 23:24:30.176020 containerd[2014]: time="2025-09-03T23:24:30.176002821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 3 23:24:30.176125 containerd[2014]: time="2025-09-03T23:24:30.176031033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 3 23:24:30.176125 containerd[2014]: time="2025-09-03T23:24:30.176059833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 3 23:24:30.176125 containerd[2014]: time="2025-09-03T23:24:30.176086737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 3 23:24:30.176125 containerd[2014]: time="2025-09-03T23:24:30.176118009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 3 23:24:30.176295 containerd[2014]: time="2025-09-03T23:24:30.176146473Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 3 23:24:30.176295 containerd[2014]: time="2025-09-03T23:24:30.176174085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 3 23:24:30.176295 containerd[2014]: time="2025-09-03T23:24:30.176200989Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 3 23:24:30.176295 containerd[2014]: time="2025-09-03T23:24:30.176234949Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 3 23:24:30.182421 containerd[2014]: time="2025-09-03T23:24:30.176625009Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 3 23:24:30.182537 containerd[2014]: time="2025-09-03T23:24:30.182451357Z" level=info msg="Start snapshots syncer" Sep 3 23:24:30.184897 containerd[2014]: time="2025-09-03T23:24:30.182632869Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 3 23:24:30.184897 containerd[2014]: time="2025-09-03T23:24:30.183060633Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183141417Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183273837Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183506481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183554025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183582261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183609789Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183664293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183695757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183723285Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183775053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183804165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183831969Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183889281Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:24:30.185262 containerd[2014]: time="2025-09-03T23:24:30.183927633Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.183950781Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.183975777Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.183996105Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184019853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184045809Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184210509Z" level=info msg="runtime interface created" Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184227849Z" level=info msg="created NRI interface" Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184251465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184277145Z" level=info msg="Connect containerd service" Sep 3 23:24:30.185889 containerd[2014]: time="2025-09-03T23:24:30.184340373Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 3 23:24:30.210962 containerd[2014]: 
time="2025-09-03T23:24:30.210014506Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:24:30.384572 locksmithd[2025]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:24:30.433133 polkitd[2086]: Started polkitd version 126 Sep 3 23:24:30.451676 amazon-ssm-agent[2143]: Initializing new seelog logger Sep 3 23:24:30.457676 amazon-ssm-agent[2143]: New Seelog Logger Creation Complete Sep 3 23:24:30.457676 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.457676 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.460241 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 processing appconfig overrides Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 processing appconfig overrides Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 processing appconfig overrides Sep 3 23:24:30.468675 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4669 INFO Proxy environment variables: Sep 3 23:24:30.489114 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:30.489114 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 3 23:24:30.489114 amazon-ssm-agent[2143]: 2025/09/03 23:24:30 processing appconfig overrides Sep 3 23:24:30.498237 polkitd[2086]: Loading rules from directory /etc/polkit-1/rules.d Sep 3 23:24:30.498848 polkitd[2086]: Loading rules from directory /run/polkit-1/rules.d Sep 3 23:24:30.498924 polkitd[2086]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:24:30.499526 polkitd[2086]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 3 23:24:30.499574 polkitd[2086]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:24:30.499685 polkitd[2086]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 3 23:24:30.502397 polkitd[2086]: Finished loading, compiling and executing 2 rules Sep 3 23:24:30.512315 systemd[1]: Started polkit.service - Authorization Manager. Sep 3 23:24:30.524297 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 3 23:24:30.526786 polkitd[2086]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 3 23:24:30.582293 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4671 INFO https_proxy: Sep 3 23:24:30.613886 systemd-resolved[1895]: System hostname changed to 'ip-172-31-22-232'. 
Sep 3 23:24:30.614003 systemd-hostnamed[2022]: Hostname set to (transient) Sep 3 23:24:30.685721 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4671 INFO http_proxy: Sep 3 23:24:30.703202 containerd[2014]: time="2025-09-03T23:24:30.702981468Z" level=info msg="Start subscribing containerd event" Sep 3 23:24:30.703351 containerd[2014]: time="2025-09-03T23:24:30.703324296Z" level=info msg="Start recovering state" Sep 3 23:24:30.703790 containerd[2014]: time="2025-09-03T23:24:30.703739208Z" level=info msg="Start event monitor" Sep 3 23:24:30.703865 containerd[2014]: time="2025-09-03T23:24:30.703797132Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:24:30.703865 containerd[2014]: time="2025-09-03T23:24:30.703818072Z" level=info msg="Start streaming server" Sep 3 23:24:30.703865 containerd[2014]: time="2025-09-03T23:24:30.703839264Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:24:30.703865 containerd[2014]: time="2025-09-03T23:24:30.703855632Z" level=info msg="runtime interface starting up..." Sep 3 23:24:30.704020 containerd[2014]: time="2025-09-03T23:24:30.703870152Z" level=info msg="starting plugins..." Sep 3 23:24:30.704020 containerd[2014]: time="2025-09-03T23:24:30.703899228Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:24:30.705099 containerd[2014]: time="2025-09-03T23:24:30.705041904Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:24:30.705183 containerd[2014]: time="2025-09-03T23:24:30.705161028Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:24:30.708303 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 3 23:24:30.712070 containerd[2014]: time="2025-09-03T23:24:30.711267288Z" level=info msg="containerd successfully booted in 0.670746s" Sep 3 23:24:30.724615 sshd_keygen[2011]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:24:30.786673 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4671 INFO no_proxy: Sep 3 23:24:30.836346 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:24:30.843691 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:24:30.851242 systemd[1]: Started sshd@0-172.31.22.232:22-139.178.89.65:55090.service - OpenSSH per-connection server daemon (139.178.89.65:55090). Sep 3 23:24:30.884084 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4674 INFO Checking if agent identity type OnPrem can be assumed Sep 3 23:24:30.907373 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:24:30.908789 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 3 23:24:30.921075 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 3 23:24:30.986693 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.4674 INFO Checking if agent identity type EC2 can be assumed Sep 3 23:24:30.999309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 3 23:24:31.007438 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:24:31.019111 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 3 23:24:31.022324 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 3 23:24:31.081958 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6928 INFO Agent will take identity from EC2 Sep 3 23:24:31.180948 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 55090 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:31.182708 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6977 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 3 23:24:31.187900 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:31.212396 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 3 23:24:31.219214 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:24:31.253766 systemd-logind[1979]: New session 1 of user core. Sep 3 23:24:31.281163 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:24:31.293771 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6977 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 3 23:24:31.299097 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:24:31.324182 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:24:31.336557 systemd-logind[1979]: New session c1 of user core. Sep 3 23:24:31.393137 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6977 INFO [amazon-ssm-agent] Starting Core Agent Sep 3 23:24:31.435931 tar[1990]: linux-arm64/README.md Sep 3 23:24:31.474393 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 3 23:24:31.493763 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6977 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 3 23:24:31.578053 amazon-ssm-agent[2143]: 2025/09/03 23:24:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:24:31.578053 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 3 23:24:31.578194 amazon-ssm-agent[2143]: 2025/09/03 23:24:31 processing appconfig overrides Sep 3 23:24:31.594741 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.6977 INFO [Registrar] Starting registrar module Sep 3 23:24:31.614732 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.7021 INFO [EC2Identity] Checking disk for registration info Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.7022 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:30.7022 INFO [EC2Identity] Generating registration keypair Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5214 INFO [EC2Identity] Checking write access before registering Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5221 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5776 INFO [EC2Identity] EC2 registration was successful. Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5777 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5778 INFO [CredentialRefresher] credentialRefresher has started Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.5778 INFO [CredentialRefresher] Starting credentials refresher loop Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.6143 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 3 23:24:31.616938 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.6146 INFO [CredentialRefresher] Credentials ready Sep 3 23:24:31.693722 amazon-ssm-agent[2143]: 2025-09-03 23:24:31.6168 INFO [CredentialRefresher] Next credential rotation will be in 29.9999591497 minutes Sep 3 23:24:31.721082 systemd[2228]: Queued start job for default target default.target. 
Sep 3 23:24:31.734001 systemd[2228]: Created slice app.slice - User Application Slice. Sep 3 23:24:31.734057 systemd[2228]: Reached target paths.target - Paths. Sep 3 23:24:31.734146 systemd[2228]: Reached target timers.target - Timers. Sep 3 23:24:31.737291 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:24:31.771332 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:24:31.773046 systemd[2228]: Reached target sockets.target - Sockets. Sep 3 23:24:31.773347 systemd[2228]: Reached target basic.target - Basic System. Sep 3 23:24:31.773455 systemd[2228]: Reached target default.target - Main User Target. Sep 3 23:24:31.773520 systemd[2228]: Startup finished in 411ms. Sep 3 23:24:31.774183 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:24:31.788987 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:24:31.930963 ntpd[1969]: Listen normally on 6 eth0 [fe80::453:ccff:fece:25d%2]:123 Sep 3 23:24:31.931544 ntpd[1969]: 3 Sep 23:24:31 ntpd[1969]: Listen normally on 6 eth0 [fe80::453:ccff:fece:25d%2]:123 Sep 3 23:24:31.950102 systemd[1]: Started sshd@1-172.31.22.232:22-139.178.89.65:52356.service - OpenSSH per-connection server daemon (139.178.89.65:52356). Sep 3 23:24:32.149174 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 52356 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:32.151708 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:32.165549 systemd-logind[1979]: New session 2 of user core. Sep 3 23:24:32.171927 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 3 23:24:32.181727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:32.186739 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 3 23:24:32.193179 systemd[1]: Startup finished in 3.739s (kernel) + 8.811s (initrd) + 8.765s (userspace) = 21.316s. Sep 3 23:24:32.206010 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:32.310704 sshd[2251]: Connection closed by 139.178.89.65 port 52356 Sep 3 23:24:32.311398 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:32.319498 systemd[1]: sshd@1-172.31.22.232:22-139.178.89.65:52356.service: Deactivated successfully. Sep 3 23:24:32.325299 systemd[1]: session-2.scope: Deactivated successfully. Sep 3 23:24:32.329757 systemd-logind[1979]: Session 2 logged out. Waiting for processes to exit. Sep 3 23:24:32.349883 systemd[1]: Started sshd@2-172.31.22.232:22-139.178.89.65:52360.service - OpenSSH per-connection server daemon (139.178.89.65:52360). Sep 3 23:24:32.353554 systemd-logind[1979]: Removed session 2. Sep 3 23:24:32.539915 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 52360 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:32.542910 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:32.556079 systemd-logind[1979]: New session 3 of user core. Sep 3 23:24:32.564920 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:24:32.669767 amazon-ssm-agent[2143]: 2025-09-03 23:24:32.6695 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 3 23:24:32.708856 sshd[2267]: Connection closed by 139.178.89.65 port 52360 Sep 3 23:24:32.709985 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:32.719026 systemd[1]: sshd@2-172.31.22.232:22-139.178.89.65:52360.service: Deactivated successfully. Sep 3 23:24:32.725755 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 3 23:24:32.734075 systemd-logind[1979]: Session 3 logged out. Waiting for processes to exit. Sep 3 23:24:32.756306 systemd[1]: Started sshd@3-172.31.22.232:22-139.178.89.65:52364.service - OpenSSH per-connection server daemon (139.178.89.65:52364). Sep 3 23:24:32.757564 systemd-logind[1979]: Removed session 3. Sep 3 23:24:32.770823 amazon-ssm-agent[2143]: 2025-09-03 23:24:32.6737 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2270) started Sep 3 23:24:32.871964 amazon-ssm-agent[2143]: 2025-09-03 23:24:32.6738 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 3 23:24:33.007421 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 52364 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:33.010368 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:33.019230 systemd-logind[1979]: New session 4 of user core. Sep 3 23:24:33.026933 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 3 23:24:33.158363 sshd[2290]: Connection closed by 139.178.89.65 port 52364 Sep 3 23:24:33.159124 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:33.166459 systemd[1]: sshd@3-172.31.22.232:22-139.178.89.65:52364.service: Deactivated successfully. Sep 3 23:24:33.173507 systemd[1]: session-4.scope: Deactivated successfully. Sep 3 23:24:33.176789 systemd-logind[1979]: Session 4 logged out. Waiting for processes to exit. Sep 3 23:24:33.180957 systemd-logind[1979]: Removed session 4. Sep 3 23:24:33.195128 systemd[1]: Started sshd@4-172.31.22.232:22-139.178.89.65:52370.service - OpenSSH per-connection server daemon (139.178.89.65:52370). 
Sep 3 23:24:33.270718 kubelet[2250]: E0903 23:24:33.270610 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:33.275847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:33.276137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:33.276729 systemd[1]: kubelet.service: Consumed 1.490s CPU time, 260.1M memory peak. Sep 3 23:24:33.407068 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 52370 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:33.409498 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:33.417873 systemd-logind[1979]: New session 5 of user core. Sep 3 23:24:33.429877 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 3 23:24:33.550367 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 3 23:24:33.551565 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:33.570156 sudo[2301]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:33.593912 sshd[2300]: Connection closed by 139.178.89.65 port 52370 Sep 3 23:24:33.594942 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:33.602854 systemd[1]: sshd@4-172.31.22.232:22-139.178.89.65:52370.service: Deactivated successfully. Sep 3 23:24:33.606194 systemd[1]: session-5.scope: Deactivated successfully. Sep 3 23:24:33.607930 systemd-logind[1979]: Session 5 logged out. Waiting for processes to exit. Sep 3 23:24:33.610946 systemd-logind[1979]: Removed session 5. 
Sep 3 23:24:33.629236 systemd[1]: Started sshd@5-172.31.22.232:22-139.178.89.65:52374.service - OpenSSH per-connection server daemon (139.178.89.65:52374). Sep 3 23:24:33.832730 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 52374 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:33.834685 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:33.844259 systemd-logind[1979]: New session 6 of user core. Sep 3 23:24:33.850880 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 3 23:24:33.953855 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 3 23:24:33.954434 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:33.964178 sudo[2311]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:33.973894 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 3 23:24:33.974947 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:33.991189 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:24:34.066896 augenrules[2333]: No rules Sep 3 23:24:34.069393 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:24:34.069929 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:24:34.071763 sudo[2310]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:34.095884 sshd[2309]: Connection closed by 139.178.89.65 port 52374 Sep 3 23:24:34.094983 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:34.102357 systemd[1]: sshd@5-172.31.22.232:22-139.178.89.65:52374.service: Deactivated successfully. Sep 3 23:24:34.106335 systemd[1]: session-6.scope: Deactivated successfully. Sep 3 23:24:34.108168 systemd-logind[1979]: Session 6 logged out. 
Waiting for processes to exit. Sep 3 23:24:34.111521 systemd-logind[1979]: Removed session 6. Sep 3 23:24:34.132030 systemd[1]: Started sshd@6-172.31.22.232:22-139.178.89.65:52376.service - OpenSSH per-connection server daemon (139.178.89.65:52376). Sep 3 23:24:34.331244 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 52376 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:34.333765 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:34.341586 systemd-logind[1979]: New session 7 of user core. Sep 3 23:24:34.350867 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 3 23:24:34.454432 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 3 23:24:34.455587 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:35.046519 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 3 23:24:35.061151 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 3 23:24:35.475761 dockerd[2363]: time="2025-09-03T23:24:35.474033460Z" level=info msg="Starting up" Sep 3 23:24:35.477075 dockerd[2363]: time="2025-09-03T23:24:35.477013444Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 3 23:24:35.549472 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2346697818-merged.mount: Deactivated successfully. Sep 3 23:24:35.631271 dockerd[2363]: time="2025-09-03T23:24:35.631191184Z" level=info msg="Loading containers: start." Sep 3 23:24:35.644718 kernel: Initializing XFRM netlink socket Sep 3 23:24:35.502404 systemd-resolved[1895]: Clock change detected. Flushing caches. Sep 3 23:24:35.513040 systemd-journald[1522]: Time jumped backwards, rotating. 
Sep 3 23:24:35.534183 (udev-worker)[2387]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:24:35.607936 systemd-networkd[1894]: docker0: Link UP Sep 3 23:24:35.615445 dockerd[2363]: time="2025-09-03T23:24:35.615277156Z" level=info msg="Loading containers: done." Sep 3 23:24:35.639352 dockerd[2363]: time="2025-09-03T23:24:35.639258148Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 3 23:24:35.639481 dockerd[2363]: time="2025-09-03T23:24:35.639399652Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 3 23:24:35.639608 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3258630284-merged.mount: Deactivated successfully. Sep 3 23:24:35.639749 dockerd[2363]: time="2025-09-03T23:24:35.639619060Z" level=info msg="Initializing buildkit" Sep 3 23:24:35.678865 dockerd[2363]: time="2025-09-03T23:24:35.678359512Z" level=info msg="Completed buildkit initialization" Sep 3 23:24:35.694559 dockerd[2363]: time="2025-09-03T23:24:35.694496440Z" level=info msg="Daemon has completed initialization" Sep 3 23:24:35.694938 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 3 23:24:35.695617 dockerd[2363]: time="2025-09-03T23:24:35.694743220Z" level=info msg="API listen on /run/docker.sock" Sep 3 23:24:36.779157 containerd[2014]: time="2025-09-03T23:24:36.779098182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 3 23:24:37.340714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250126005.mount: Deactivated successfully. 
Sep 3 23:24:38.794821 containerd[2014]: time="2025-09-03T23:24:38.793968728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:38.797203 containerd[2014]: time="2025-09-03T23:24:38.797157548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 3 23:24:38.799712 containerd[2014]: time="2025-09-03T23:24:38.799669724Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:38.805287 containerd[2014]: time="2025-09-03T23:24:38.805204820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:38.807581 containerd[2014]: time="2025-09-03T23:24:38.807479528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.028311122s" Sep 3 23:24:38.807581 containerd[2014]: time="2025-09-03T23:24:38.807571988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 3 23:24:38.809963 containerd[2014]: time="2025-09-03T23:24:38.809768048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 3 23:24:40.216741 containerd[2014]: time="2025-09-03T23:24:40.216663331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:40.219289 containerd[2014]: time="2025-09-03T23:24:40.218806135Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 3 23:24:40.220317 containerd[2014]: time="2025-09-03T23:24:40.220260403Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:40.224807 containerd[2014]: time="2025-09-03T23:24:40.224741695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:40.226774 containerd[2014]: time="2025-09-03T23:24:40.226717039Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.416581887s" Sep 3 23:24:40.226901 containerd[2014]: time="2025-09-03T23:24:40.226771963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 3 23:24:40.227469 containerd[2014]: time="2025-09-03T23:24:40.227417059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 3 23:24:41.448820 containerd[2014]: time="2025-09-03T23:24:41.448624605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.451635 containerd[2014]: time="2025-09-03T23:24:41.451551717Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 3 23:24:41.453120 containerd[2014]: time="2025-09-03T23:24:41.453069813Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.457359 containerd[2014]: time="2025-09-03T23:24:41.457299297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.460296 containerd[2014]: time="2025-09-03T23:24:41.460223133Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.23275031s" Sep 3 23:24:41.460476 containerd[2014]: time="2025-09-03T23:24:41.460274121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 3 23:24:41.461532 containerd[2014]: time="2025-09-03T23:24:41.461275653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 3 23:24:42.733337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859080759.mount: Deactivated successfully. Sep 3 23:24:43.097990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 3 23:24:43.103134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:43.483440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:24:43.498327 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:43.594664 containerd[2014]: time="2025-09-03T23:24:43.594152748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:43.595770 kubelet[2647]: E0903 23:24:43.595696 2647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:43.598600 containerd[2014]: time="2025-09-03T23:24:43.598488576Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 3 23:24:43.600849 containerd[2014]: time="2025-09-03T23:24:43.599826384Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:43.606654 containerd[2014]: time="2025-09-03T23:24:43.606599220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:43.607116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:43.608153 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 3 23:24:43.609130 containerd[2014]: time="2025-09-03T23:24:43.608487972Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 2.147002223s" Sep 3 23:24:43.609130 containerd[2014]: time="2025-09-03T23:24:43.608534604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 3 23:24:43.609611 containerd[2014]: time="2025-09-03T23:24:43.609571596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 3 23:24:43.609855 systemd[1]: kubelet.service: Consumed 333ms CPU time, 104.7M memory peak. Sep 3 23:24:44.192015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820634938.mount: Deactivated successfully. 
Sep 3 23:24:45.431822 containerd[2014]: time="2025-09-03T23:24:45.431667433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:45.435077 containerd[2014]: time="2025-09-03T23:24:45.435023461Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 3 23:24:45.437259 containerd[2014]: time="2025-09-03T23:24:45.437188249Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:45.444030 containerd[2014]: time="2025-09-03T23:24:45.443924797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:45.445875 containerd[2014]: time="2025-09-03T23:24:45.445316245Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.835156757s" Sep 3 23:24:45.445875 containerd[2014]: time="2025-09-03T23:24:45.445372105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 3 23:24:45.446111 containerd[2014]: time="2025-09-03T23:24:45.445918777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 3 23:24:45.923742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811551249.mount: Deactivated successfully. 
Sep 3 23:24:45.936177 containerd[2014]: time="2025-09-03T23:24:45.936098991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:45.938044 containerd[2014]: time="2025-09-03T23:24:45.937970043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 3 23:24:45.940500 containerd[2014]: time="2025-09-03T23:24:45.940430823Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:45.945170 containerd[2014]: time="2025-09-03T23:24:45.945073131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:45.946875 containerd[2014]: time="2025-09-03T23:24:45.946411767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.44853ms" Sep 3 23:24:45.946875 containerd[2014]: time="2025-09-03T23:24:45.946463427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 3 23:24:45.947502 containerd[2014]: time="2025-09-03T23:24:45.947464875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 3 23:24:46.482374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966421614.mount: Deactivated 
successfully. Sep 3 23:24:48.522219 containerd[2014]: time="2025-09-03T23:24:48.522160600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.524590 containerd[2014]: time="2025-09-03T23:24:48.524546860Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 3 23:24:48.526699 containerd[2014]: time="2025-09-03T23:24:48.526658740Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.532272 containerd[2014]: time="2025-09-03T23:24:48.532207984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.534397 containerd[2014]: time="2025-09-03T23:24:48.534351796Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.586725653s" Sep 3 23:24:48.534542 containerd[2014]: time="2025-09-03T23:24:48.534501472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 3 23:24:53.684053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 3 23:24:53.688498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:54.031053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:24:54.045377 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:54.116808 kubelet[2795]: E0903 23:24:54.116131 2795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:54.120960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:54.121263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:54.122273 systemd[1]: kubelet.service: Consumed 287ms CPU time, 107M memory peak. Sep 3 23:24:56.780327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:56.780853 systemd[1]: kubelet.service: Consumed 287ms CPU time, 107M memory peak. Sep 3 23:24:56.784893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:56.835091 systemd[1]: Reload requested from client PID 2809 ('systemctl') (unit session-7.scope)... Sep 3 23:24:56.835126 systemd[1]: Reloading... Sep 3 23:24:57.079824 zram_generator::config[2857]: No configuration found. Sep 3 23:24:57.292646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:24:57.554308 systemd[1]: Reloading finished in 718 ms. Sep 3 23:24:57.679990 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 3 23:24:57.680202 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 3 23:24:57.680854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:24:57.680960 systemd[1]: kubelet.service: Consumed 231ms CPU time, 95.2M memory peak. Sep 3 23:24:57.686277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:58.017747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:58.032340 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:24:58.106638 kubelet[2918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:58.107110 kubelet[2918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:24:58.107224 kubelet[2918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 3 23:24:58.107426 kubelet[2918]: I0903 23:24:58.107383 2918 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:24:58.647850 kubelet[2918]: I0903 23:24:58.646756 2918 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 3 23:24:58.648023 kubelet[2918]: I0903 23:24:58.647998 2918 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:24:58.648614 kubelet[2918]: I0903 23:24:58.648588 2918 server.go:956] "Client rotation is on, will bootstrap in background" Sep 3 23:24:58.698912 kubelet[2918]: E0903 23:24:58.698857 2918 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 3 23:24:58.699822 kubelet[2918]: I0903 23:24:58.699559 2918 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:24:58.723412 kubelet[2918]: I0903 23:24:58.723369 2918 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:24:58.730381 kubelet[2918]: I0903 23:24:58.730345 2918 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:24:58.731925 kubelet[2918]: I0903 23:24:58.731143 2918 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:24:58.731925 kubelet[2918]: I0903 23:24:58.731186 2918 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:24:58.731925 kubelet[2918]: I0903 23:24:58.731579 2918 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 
23:24:58.731925 kubelet[2918]: I0903 23:24:58.731600 2918 container_manager_linux.go:303] "Creating device plugin manager" Sep 3 23:24:58.732312 kubelet[2918]: I0903 23:24:58.732290 2918 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:58.738765 kubelet[2918]: I0903 23:24:58.738716 2918 kubelet.go:480] "Attempting to sync node with API server" Sep 3 23:24:58.739170 kubelet[2918]: I0903 23:24:58.739148 2918 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:24:58.739294 kubelet[2918]: I0903 23:24:58.739276 2918 kubelet.go:386] "Adding apiserver pod source" Sep 3 23:24:58.741708 kubelet[2918]: I0903 23:24:58.741681 2918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:24:58.745230 kubelet[2918]: E0903 23:24:58.745177 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 3 23:24:58.746581 kubelet[2918]: I0903 23:24:58.746545 2918 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:24:58.748052 kubelet[2918]: I0903 23:24:58.748019 2918 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 3 23:24:58.748400 kubelet[2918]: W0903 23:24:58.748380 2918 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 3 23:24:58.755751 kubelet[2918]: I0903 23:24:58.754776 2918 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:24:58.755751 kubelet[2918]: I0903 23:24:58.754928 2918 server.go:1289] "Started kubelet" Sep 3 23:24:58.755751 kubelet[2918]: E0903 23:24:58.755027 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 3 23:24:58.768084 kubelet[2918]: I0903 23:24:58.768045 2918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:24:58.770044 kubelet[2918]: I0903 23:24:58.769975 2918 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:24:58.771988 kubelet[2918]: I0903 23:24:58.771934 2918 server.go:317] "Adding debug handlers to kubelet server" Sep 3 23:24:58.773999 kubelet[2918]: I0903 23:24:58.773909 2918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:24:58.774337 kubelet[2918]: I0903 23:24:58.774293 2918 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:24:58.775583 kubelet[2918]: I0903 23:24:58.775526 2918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:24:58.780952 kubelet[2918]: I0903 23:24:58.780904 2918 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:24:58.781304 kubelet[2918]: E0903 23:24:58.781269 2918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-232\" not found" Sep 3 23:24:58.781763 kubelet[2918]: I0903 23:24:58.781731 2918 desired_state_of_world_populator.go:150] "Desired state populator starts 
to run" Sep 3 23:24:58.781926 kubelet[2918]: I0903 23:24:58.781850 2918 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:24:58.792891 kubelet[2918]: E0903 23:24:58.783067 2918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-232.1861e95926490863 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-232,UID:ip-172-31-22-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-232,},FirstTimestamp:2025-09-03 23:24:58.754885731 +0000 UTC m=+0.714968093,LastTimestamp:2025-09-03 23:24:58.754885731 +0000 UTC m=+0.714968093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-232,}" Sep 3 23:24:58.792891 kubelet[2918]: E0903 23:24:58.792835 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 3 23:24:58.795890 kubelet[2918]: E0903 23:24:58.793408 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="200ms" Sep 3 23:24:58.797466 kubelet[2918]: I0903 23:24:58.796639 2918 factory.go:223] Registration of the systemd container factory successfully Sep 3 23:24:58.797466 kubelet[2918]: I0903 23:24:58.796833 2918 factory.go:221] Registration 
of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:24:58.802256 kubelet[2918]: E0903 23:24:58.802068 2918 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:24:58.803519 kubelet[2918]: I0903 23:24:58.802226 2918 factory.go:223] Registration of the containerd container factory successfully Sep 3 23:24:58.851045 kubelet[2918]: I0903 23:24:58.850922 2918 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 3 23:24:58.854878 kubelet[2918]: I0903 23:24:58.854846 2918 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:24:58.855044 kubelet[2918]: I0903 23:24:58.855023 2918 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:24:58.855918 kubelet[2918]: I0903 23:24:58.855888 2918 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:58.856200 kubelet[2918]: I0903 23:24:58.855293 2918 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 3 23:24:58.856265 kubelet[2918]: I0903 23:24:58.856201 2918 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 3 23:24:58.856265 kubelet[2918]: I0903 23:24:58.856256 2918 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 3 23:24:58.856357 kubelet[2918]: I0903 23:24:58.856277 2918 kubelet.go:2436] "Starting kubelet main sync loop" Sep 3 23:24:58.856401 kubelet[2918]: E0903 23:24:58.856367 2918 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:24:58.861349 kubelet[2918]: E0903 23:24:58.860740 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 3 23:24:58.861349 kubelet[2918]: I0903 23:24:58.860971 2918 policy_none.go:49] "None policy: Start" Sep 3 23:24:58.861349 kubelet[2918]: I0903 23:24:58.860997 2918 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:24:58.861349 kubelet[2918]: I0903 23:24:58.861020 2918 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:24:58.862355 kubelet[2918]: E0903 23:24:58.862160 2918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-232.1861e95926490863 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-232,UID:ip-172-31-22-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-232,},FirstTimestamp:2025-09-03 23:24:58.754885731 +0000 UTC m=+0.714968093,LastTimestamp:2025-09-03 23:24:58.754885731 +0000 UTC m=+0.714968093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-232,}" Sep 3 23:24:58.879537 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:24:58.881922 kubelet[2918]: E0903 23:24:58.881889 2918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-232\" not found" Sep 3 23:24:58.907723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:24:58.916773 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 3 23:24:58.937685 kubelet[2918]: E0903 23:24:58.937482 2918 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 3 23:24:58.939118 kubelet[2918]: I0903 23:24:58.939083 2918 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:24:58.939381 kubelet[2918]: I0903 23:24:58.939298 2918 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:24:58.940872 kubelet[2918]: I0903 23:24:58.940806 2918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:24:58.942212 kubelet[2918]: E0903 23:24:58.942133 2918 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:24:58.942432 kubelet[2918]: E0903 23:24:58.942365 2918 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-232\" not found" Sep 3 23:24:58.981153 systemd[1]: Created slice kubepods-burstable-pod107efa80d34dd7d1df2a1446f59b4fad.slice - libcontainer container kubepods-burstable-pod107efa80d34dd7d1df2a1446f59b4fad.slice. 
Sep 3 23:24:58.984153 kubelet[2918]: I0903 23:24:58.984113 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-ca-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232" Sep 3 23:24:58.984456 kubelet[2918]: I0903 23:24:58.984376 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232" Sep 3 23:24:58.984839 kubelet[2918]: I0903 23:24:58.984712 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232" Sep 3 23:24:58.985002 kubelet[2918]: I0903 23:24:58.984979 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232" Sep 3 23:24:58.985403 kubelet[2918]: I0903 23:24:58.985358 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: 
\"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232" Sep 3 23:24:58.985578 kubelet[2918]: I0903 23:24:58.985417 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232" Sep 3 23:24:58.985578 kubelet[2918]: I0903 23:24:58.985458 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232" Sep 3 23:24:58.985578 kubelet[2918]: I0903 23:24:58.985494 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232" Sep 3 23:24:58.985578 kubelet[2918]: I0903 23:24:58.985533 2918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1fcf5f573b8ac641900493fa8d193c-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-232\" (UID: \"5b1fcf5f573b8ac641900493fa8d193c\") " pod="kube-system/kube-scheduler-ip-172-31-22-232" Sep 3 23:24:58.996628 kubelet[2918]: E0903 23:24:58.996545 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="400ms"
Sep 3 23:24:59.000357 kubelet[2918]: E0903 23:24:58.999980 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:24:59.007844 systemd[1]: Created slice kubepods-burstable-pod639cdba06055900fc73b7550f29c98ae.slice - libcontainer container kubepods-burstable-pod639cdba06055900fc73b7550f29c98ae.slice.
Sep 3 23:24:59.021838 kubelet[2918]: E0903 23:24:59.021695 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:24:59.026072 systemd[1]: Created slice kubepods-burstable-pod5b1fcf5f573b8ac641900493fa8d193c.slice - libcontainer container kubepods-burstable-pod5b1fcf5f573b8ac641900493fa8d193c.slice.
Sep 3 23:24:59.030825 kubelet[2918]: E0903 23:24:59.030451 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:24:59.043974 kubelet[2918]: I0903 23:24:59.043920 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-232"
Sep 3 23:24:59.044632 kubelet[2918]: E0903 23:24:59.044581 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Sep 3 23:24:59.247626 kubelet[2918]: I0903 23:24:59.247566 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-232"
Sep 3 23:24:59.248360 kubelet[2918]: E0903 23:24:59.248105 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Sep 3 23:24:59.302209 containerd[2014]: time="2025-09-03T23:24:59.302110790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-232,Uid:107efa80d34dd7d1df2a1446f59b4fad,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:59.324147 containerd[2014]: time="2025-09-03T23:24:59.323901314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-232,Uid:639cdba06055900fc73b7550f29c98ae,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:59.332487 containerd[2014]: time="2025-09-03T23:24:59.332439110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-232,Uid:5b1fcf5f573b8ac641900493fa8d193c,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:59.355979 containerd[2014]: time="2025-09-03T23:24:59.355909958Z" level=info msg="connecting to shim 3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82" address="unix:///run/containerd/s/63e6b3364da56c0ffa2b589b737d20ff904f19f9f6bbddbbd3db5a21a4153311" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:59.398664 kubelet[2918]: E0903 23:24:59.397567 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="800ms"
Sep 3 23:24:59.436381 systemd[1]: Started cri-containerd-3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82.scope - libcontainer container 3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82.
Sep 3 23:24:59.442835 containerd[2014]: time="2025-09-03T23:24:59.442755182Z" level=info msg="connecting to shim 7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28" address="unix:///run/containerd/s/c40f5c0857c1d9166389563ec17ab47be8b71da4bacd71a7a373803347276ebf" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:59.451635 containerd[2014]: time="2025-09-03T23:24:59.451572050Z" level=info msg="connecting to shim ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a" address="unix:///run/containerd/s/39b6d0581fddd6d2c7b1fed74e1c9efbdea3c0e0f335637f35516855423d8700" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:59.521412 systemd[1]: Started cri-containerd-ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a.scope - libcontainer container ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a.
Sep 3 23:24:59.536672 systemd[1]: Started cri-containerd-7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28.scope - libcontainer container 7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28.
Sep 3 23:24:59.592549 containerd[2014]: time="2025-09-03T23:24:59.592438431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-232,Uid:107efa80d34dd7d1df2a1446f59b4fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82\""
Sep 3 23:24:59.609467 containerd[2014]: time="2025-09-03T23:24:59.609405267Z" level=info msg="CreateContainer within sandbox \"3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 3 23:24:59.630067 containerd[2014]: time="2025-09-03T23:24:59.629999883Z" level=info msg="Container d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:59.655976 kubelet[2918]: I0903 23:24:59.655913 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-232"
Sep 3 23:24:59.656482 kubelet[2918]: E0903 23:24:59.656371 2918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Sep 3 23:24:59.664018 containerd[2014]: time="2025-09-03T23:24:59.663890284Z" level=info msg="CreateContainer within sandbox \"3b22f92b7e627d0958cda83ce7895a2e748558246b6c31a26b0b9d035571fd82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6\""
Sep 3 23:24:59.665663 containerd[2014]: time="2025-09-03T23:24:59.665589016Z" level=info msg="StartContainer for \"d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6\""
Sep 3 23:24:59.668245 containerd[2014]: time="2025-09-03T23:24:59.668110708Z" level=info msg="connecting to shim d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6" address="unix:///run/containerd/s/63e6b3364da56c0ffa2b589b737d20ff904f19f9f6bbddbbd3db5a21a4153311" protocol=ttrpc version=3
Sep 3 23:24:59.693043 containerd[2014]: time="2025-09-03T23:24:59.692984284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-232,Uid:5b1fcf5f573b8ac641900493fa8d193c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a\""
Sep 3 23:24:59.701811 containerd[2014]: time="2025-09-03T23:24:59.701570020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-232,Uid:639cdba06055900fc73b7550f29c98ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28\""
Sep 3 23:24:59.702943 containerd[2014]: time="2025-09-03T23:24:59.702423148Z" level=info msg="CreateContainer within sandbox \"ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 3 23:24:59.715250 containerd[2014]: time="2025-09-03T23:24:59.713748208Z" level=info msg="CreateContainer within sandbox \"7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 3 23:24:59.725240 systemd[1]: Started cri-containerd-d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6.scope - libcontainer container d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6.
Sep 3 23:24:59.727247 containerd[2014]: time="2025-09-03T23:24:59.727173952Z" level=info msg="Container 92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:59.741482 containerd[2014]: time="2025-09-03T23:24:59.741415936Z" level=info msg="Container b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:59.752036 containerd[2014]: time="2025-09-03T23:24:59.751965844Z" level=info msg="CreateContainer within sandbox \"ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\""
Sep 3 23:24:59.753284 containerd[2014]: time="2025-09-03T23:24:59.753132196Z" level=info msg="StartContainer for \"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\""
Sep 3 23:24:59.757905 containerd[2014]: time="2025-09-03T23:24:59.757830256Z" level=info msg="connecting to shim 92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78" address="unix:///run/containerd/s/39b6d0581fddd6d2c7b1fed74e1c9efbdea3c0e0f335637f35516855423d8700" protocol=ttrpc version=3
Sep 3 23:24:59.770216 containerd[2014]: time="2025-09-03T23:24:59.770142400Z" level=info msg="CreateContainer within sandbox \"7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\""
Sep 3 23:24:59.775032 containerd[2014]: time="2025-09-03T23:24:59.772892164Z" level=info msg="StartContainer for \"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\""
Sep 3 23:24:59.794027 containerd[2014]: time="2025-09-03T23:24:59.793971316Z" level=info msg="connecting to shim b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935" address="unix:///run/containerd/s/c40f5c0857c1d9166389563ec17ab47be8b71da4bacd71a7a373803347276ebf" protocol=ttrpc version=3
Sep 3 23:24:59.802678 kubelet[2918]: E0903 23:24:59.802590 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 3 23:24:59.824452 systemd[1]: Started cri-containerd-92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78.scope - libcontainer container 92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78.
Sep 3 23:24:59.865573 systemd[1]: Started cri-containerd-b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935.scope - libcontainer container b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935.
Sep 3 23:24:59.903525 containerd[2014]: time="2025-09-03T23:24:59.903378329Z" level=info msg="StartContainer for \"d7e15a1ee7607d761f7bfbc9814f649055e67d6f5c6253492097c60edbea71e6\" returns successfully"
Sep 3 23:24:59.924911 kubelet[2918]: E0903 23:24:59.924838 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 3 23:24:59.992380 containerd[2014]: time="2025-09-03T23:24:59.992318501Z" level=info msg="StartContainer for \"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\" returns successfully"
Sep 3 23:25:00.003129 kubelet[2918]: E0903 23:25:00.003074 2918 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0\": dial tcp 172.31.22.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 3 23:25:00.054233 containerd[2014]: time="2025-09-03T23:25:00.053966545Z" level=info msg="StartContainer for \"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\" returns successfully"
Sep 3 23:25:00.225561 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 3 23:25:00.460186 kubelet[2918]: I0903 23:25:00.460140 2918 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-232"
Sep 3 23:25:00.926834 kubelet[2918]: E0903 23:25:00.925864 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:00.937341 kubelet[2918]: E0903 23:25:00.937289 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:00.946448 kubelet[2918]: E0903 23:25:00.946402 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:01.947716 kubelet[2918]: E0903 23:25:01.947661 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:01.950077 kubelet[2918]: E0903 23:25:01.950034 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:01.950624 kubelet[2918]: E0903 23:25:01.950582 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:02.949547 kubelet[2918]: E0903 23:25:02.949486 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:02.952408 kubelet[2918]: E0903 23:25:02.952341 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:03.747197 kubelet[2918]: I0903 23:25:03.747144 2918 apiserver.go:52] "Watching apiserver"
Sep 3 23:25:03.868566 kubelet[2918]: E0903 23:25:03.868505 2918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:03.882203 kubelet[2918]: I0903 23:25:03.882148 2918 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 3 23:25:03.955032 kubelet[2918]: E0903 23:25:03.954987 2918 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Sep 3 23:25:04.061916 kubelet[2918]: I0903 23:25:04.060467 2918 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-232"
Sep 3 23:25:04.061916 kubelet[2918]: E0903 23:25:04.060535 2918 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-232\": node \"ip-172-31-22-232\" not found"
Sep 3 23:25:04.082807 kubelet[2918]: I0903 23:25:04.081905 2918 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:04.139437 kubelet[2918]: E0903 23:25:04.139379 2918 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:04.139437 kubelet[2918]: I0903 23:25:04.139429 2918 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-232"
Sep 3 23:25:04.144384 kubelet[2918]: E0903 23:25:04.144331 2918 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-232"
Sep 3 23:25:04.144384 kubelet[2918]: I0903 23:25:04.144378 2918 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:04.148547 kubelet[2918]: E0903 23:25:04.148488 2918 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:06.585832 systemd[1]: Reload requested from client PID 3205 ('systemctl') (unit session-7.scope)...
Sep 3 23:25:06.586343 systemd[1]: Reloading...
Sep 3 23:25:06.813863 zram_generator::config[3255]: No configuration found.
Sep 3 23:25:07.009476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:25:07.312408 systemd[1]: Reloading finished in 725 ms.
Sep 3 23:25:07.359447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:25:07.378564 systemd[1]: kubelet.service: Deactivated successfully.
Sep 3 23:25:07.379061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:25:07.379218 systemd[1]: kubelet.service: Consumed 1.431s CPU time, 128M memory peak.
Sep 3 23:25:07.383750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:25:07.725886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:25:07.740413 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:25:07.837571 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:25:07.837571 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:25:07.837571 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:25:07.838285 kubelet[3309]: I0903 23:25:07.838219 3309 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:25:07.861154 kubelet[3309]: I0903 23:25:07.860754 3309 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 3 23:25:07.861154 kubelet[3309]: I0903 23:25:07.861134 3309 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:25:07.861767 kubelet[3309]: I0903 23:25:07.861719 3309 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 3 23:25:07.864113 kubelet[3309]: I0903 23:25:07.864060 3309 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 3 23:25:07.873648 kubelet[3309]: I0903 23:25:07.873585 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:25:07.886852 kubelet[3309]: I0903 23:25:07.886810 3309 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:25:07.894374 kubelet[3309]: I0903 23:25:07.894306 3309 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:25:07.896846 kubelet[3309]: I0903 23:25:07.895316 3309 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:25:07.896846 kubelet[3309]: I0903 23:25:07.895385 3309 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:25:07.896846 kubelet[3309]: I0903 23:25:07.895873 3309 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:25:07.896846 kubelet[3309]: I0903 23:25:07.895903 3309 container_manager_linux.go:303] "Creating device plugin manager"
Sep 3 23:25:07.896846 kubelet[3309]: I0903 23:25:07.895987 3309 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:25:07.897242 kubelet[3309]: I0903 23:25:07.896253 3309 kubelet.go:480] "Attempting to sync node with API server"
Sep 3 23:25:07.897242 kubelet[3309]: I0903 23:25:07.896286 3309 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:25:07.897242 kubelet[3309]: I0903 23:25:07.896339 3309 kubelet.go:386] "Adding apiserver pod source"
Sep 3 23:25:07.897242 kubelet[3309]: I0903 23:25:07.896368 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:25:07.906234 kubelet[3309]: I0903 23:25:07.906070 3309 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:25:07.908178 kubelet[3309]: I0903 23:25:07.908131 3309 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 3 23:25:07.934078 kubelet[3309]: I0903 23:25:07.934030 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 3 23:25:07.934586 kubelet[3309]: I0903 23:25:07.934099 3309 server.go:1289] "Started kubelet"
Sep 3 23:25:07.940697 kubelet[3309]: I0903 23:25:07.940627 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:25:07.946235 kubelet[3309]: I0903 23:25:07.946168 3309 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:25:07.951920 kubelet[3309]: I0903 23:25:07.947776 3309 server.go:317] "Adding debug handlers to kubelet server"
Sep 3 23:25:07.965062 kubelet[3309]: I0903 23:25:07.964952 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:25:07.965441 kubelet[3309]: I0903 23:25:07.965400 3309 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:25:07.969973 kubelet[3309]: I0903 23:25:07.969920 3309 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 3 23:25:07.970367 kubelet[3309]: E0903 23:25:07.970316 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-232\" not found"
Sep 3 23:25:07.970767 kubelet[3309]: I0903 23:25:07.970731 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 3 23:25:07.973117 kubelet[3309]: I0903 23:25:07.971941 3309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:25:07.984185 kubelet[3309]: E0903 23:25:07.983375 3309 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:25:07.984185 kubelet[3309]: I0903 23:25:07.983688 3309 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:25:07.984185 kubelet[3309]: I0903 23:25:07.983976 3309 factory.go:223] Registration of the systemd container factory successfully
Sep 3 23:25:07.984185 kubelet[3309]: I0903 23:25:07.984129 3309 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:25:07.992612 kubelet[3309]: I0903 23:25:07.992564 3309 factory.go:223] Registration of the containerd container factory successfully
Sep 3 23:25:08.081044 kubelet[3309]: I0903 23:25:08.080802 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:25:08.084127 kubelet[3309]: I0903 23:25:08.084073 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:25:08.084127 kubelet[3309]: I0903 23:25:08.084119 3309 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 3 23:25:08.084320 kubelet[3309]: I0903 23:25:08.084149 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 3 23:25:08.084320 kubelet[3309]: I0903 23:25:08.084166 3309 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 3 23:25:08.084320 kubelet[3309]: E0903 23:25:08.084231 3309 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179564 3309 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179594 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179629 3309 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179867 3309 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179890 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179920 3309 policy_none.go:49] "None policy: Start"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179940 3309 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.179960 3309 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:25:08.180947 kubelet[3309]: I0903 23:25:08.180126 3309 state_mem.go:75] "Updated machine memory state"
Sep 3 23:25:08.184413 kubelet[3309]: E0903 23:25:08.184368 3309 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 3 23:25:08.191855 kubelet[3309]: E0903 23:25:08.191747 3309 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 3 23:25:08.194093 kubelet[3309]: I0903 23:25:08.194019 3309 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:25:08.194354 kubelet[3309]: I0903 23:25:08.194240 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:25:08.197805 kubelet[3309]: I0903 23:25:08.195406 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:25:08.199909 kubelet[3309]: E0903 23:25:08.199848 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 3 23:25:08.328990 kubelet[3309]: I0903 23:25:08.328868 3309 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-232"
Sep 3 23:25:08.349153 kubelet[3309]: I0903 23:25:08.349098 3309 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-232"
Sep 3 23:25:08.349423 kubelet[3309]: I0903 23:25:08.349403 3309 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-232"
Sep 3 23:25:08.390826 kubelet[3309]: I0903 23:25:08.388384 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:08.392294 kubelet[3309]: I0903 23:25:08.391193 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.392294 kubelet[3309]: I0903 23:25:08.386777 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-232"
Sep 3 23:25:08.493512 kubelet[3309]: I0903 23:25:08.492381 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:08.493512 kubelet[3309]: I0903 23:25:08.492450 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.493512 kubelet[3309]: I0903 23:25:08.492490 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.493512 kubelet[3309]: I0903 23:25:08.492533 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-ca-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:08.493512 kubelet[3309]: I0903 23:25:08.492570 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/107efa80d34dd7d1df2a1446f59b4fad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"107efa80d34dd7d1df2a1446f59b4fad\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:08.493874 kubelet[3309]: I0903 23:25:08.492605 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.493874 kubelet[3309]: I0903 23:25:08.492639 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.493874 kubelet[3309]: I0903 23:25:08.492676 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/639cdba06055900fc73b7550f29c98ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"639cdba06055900fc73b7550f29c98ae\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Sep 3 23:25:08.493874 kubelet[3309]: I0903 23:25:08.492714 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1fcf5f573b8ac641900493fa8d193c-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-232\" (UID: \"5b1fcf5f573b8ac641900493fa8d193c\") " pod="kube-system/kube-scheduler-ip-172-31-22-232"
Sep 3 23:25:08.912807 kubelet[3309]: I0903 23:25:08.912739 3309 apiserver.go:52] "Watching apiserver"
Sep 3 23:25:08.971927 kubelet[3309]: I0903 23:25:08.971858 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 3 23:25:09.098686 kubelet[3309]: I0903 23:25:09.098402 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-232" podStartSLOduration=1.098376334 podStartE2EDuration="1.098376334s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:09.078206446 +0000 UTC m=+1.328633959" watchObservedRunningTime="2025-09-03 23:25:09.098376334 +0000 UTC m=+1.348803847"
Sep 3 23:25:09.126866 kubelet[3309]: I0903 23:25:09.126314 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-232" podStartSLOduration=1.126290483 podStartE2EDuration="1.126290483s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:09.10047493 +0000 UTC m=+1.350902479" watchObservedRunningTime="2025-09-03 23:25:09.126290483 +0000 UTC m=+1.376717984"
Sep 3 23:25:09.154648 kubelet[3309]: I0903 23:25:09.154541 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:09.180706 kubelet[3309]: E0903 23:25:09.180560 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-232\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Sep 3 23:25:09.196753 kubelet[3309]: I0903 23:25:09.196673 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-232" podStartSLOduration=1.196654235 podStartE2EDuration="1.196654235s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:09.137902499 +0000 UTC m=+1.388330024" watchObservedRunningTime="2025-09-03 23:25:09.196654235 +0000 UTC m=+1.447081772"
Sep 3 23:25:10.689360 kubelet[3309]: I0903 23:25:10.689292 3309 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 3 23:25:10.690814 containerd[2014]: time="2025-09-03T23:25:10.690687002Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 3 23:25:10.691982 kubelet[3309]: I0903 23:25:10.691940 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 3 23:25:11.840872 systemd[1]: Created slice kubepods-besteffort-podb69a9de9_cfa3_4fad_b40e_67c99414eaf4.slice - libcontainer container kubepods-besteffort-podb69a9de9_cfa3_4fad_b40e_67c99414eaf4.slice.
Sep 3 23:25:11.913272 kubelet[3309]: I0903 23:25:11.913214 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b69a9de9-cfa3-4fad-b40e-67c99414eaf4-kube-proxy\") pod \"kube-proxy-vg4m8\" (UID: \"b69a9de9-cfa3-4fad-b40e-67c99414eaf4\") " pod="kube-system/kube-proxy-vg4m8"
Sep 3 23:25:11.914236 kubelet[3309]: I0903 23:25:11.913284 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b69a9de9-cfa3-4fad-b40e-67c99414eaf4-xtables-lock\") pod \"kube-proxy-vg4m8\" (UID: \"b69a9de9-cfa3-4fad-b40e-67c99414eaf4\") " pod="kube-system/kube-proxy-vg4m8"
Sep 3 23:25:11.914236 kubelet[3309]: I0903 23:25:11.913327 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b69a9de9-cfa3-4fad-b40e-67c99414eaf4-lib-modules\") pod \"kube-proxy-vg4m8\" (UID: \"b69a9de9-cfa3-4fad-b40e-67c99414eaf4\") " pod="kube-system/kube-proxy-vg4m8"
Sep 3 23:25:11.914236 kubelet[3309]: I0903 23:25:11.913368 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbhx\" (UniqueName: \"kubernetes.io/projected/b69a9de9-cfa3-4fad-b40e-67c99414eaf4-kube-api-access-cqbhx\") pod \"kube-proxy-vg4m8\" (UID: \"b69a9de9-cfa3-4fad-b40e-67c99414eaf4\") " pod="kube-system/kube-proxy-vg4m8"
Sep 3 23:25:11.973572 systemd[1]: Created slice kubepods-besteffort-poddb975dc2_d8ad_46dc_99cb_eb209ea141f3.slice - libcontainer container kubepods-besteffort-poddb975dc2_d8ad_46dc_99cb_eb209ea141f3.slice.
Sep 3 23:25:12.014839 kubelet[3309]: I0903 23:25:12.014682 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db975dc2-d8ad-46dc-99cb-eb209ea141f3-var-lib-calico\") pod \"tigera-operator-755d956888-p58l2\" (UID: \"db975dc2-d8ad-46dc-99cb-eb209ea141f3\") " pod="tigera-operator/tigera-operator-755d956888-p58l2"
Sep 3 23:25:12.016846 kubelet[3309]: I0903 23:25:12.014763 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4mm\" (UniqueName: \"kubernetes.io/projected/db975dc2-d8ad-46dc-99cb-eb209ea141f3-kube-api-access-8v4mm\") pod \"tigera-operator-755d956888-p58l2\" (UID: \"db975dc2-d8ad-46dc-99cb-eb209ea141f3\") " pod="tigera-operator/tigera-operator-755d956888-p58l2"
Sep 3 23:25:12.161233 containerd[2014]: time="2025-09-03T23:25:12.160906358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg4m8,Uid:b69a9de9-cfa3-4fad-b40e-67c99414eaf4,Namespace:kube-system,Attempt:0,}"
Sep 3 23:25:12.193694 containerd[2014]: time="2025-09-03T23:25:12.193548542Z" level=info msg="connecting to shim 81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe" address="unix:///run/containerd/s/9aded672e5e7528e7c14c09b6e091dbc1a7c8cf79ecf2e038e96efe9b5300660" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:25:12.239194 systemd[1]: Started cri-containerd-81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe.scope - libcontainer container 
81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe. Sep 3 23:25:12.282758 containerd[2014]: time="2025-09-03T23:25:12.282382610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-p58l2,Uid:db975dc2-d8ad-46dc-99cb-eb209ea141f3,Namespace:tigera-operator,Attempt:0,}" Sep 3 23:25:12.293265 containerd[2014]: time="2025-09-03T23:25:12.293158190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg4m8,Uid:b69a9de9-cfa3-4fad-b40e-67c99414eaf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe\"" Sep 3 23:25:12.318833 containerd[2014]: time="2025-09-03T23:25:12.318678422Z" level=info msg="CreateContainer within sandbox \"81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:25:12.337072 containerd[2014]: time="2025-09-03T23:25:12.336999398Z" level=info msg="connecting to shim 356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b" address="unix:///run/containerd/s/9c0df559a50606cc6b7f64c6ee86a045b62a48747bbc379e1c6d6cf25cdf751d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:12.340201 containerd[2014]: time="2025-09-03T23:25:12.340135334Z" level=info msg="Container 26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:12.357761 containerd[2014]: time="2025-09-03T23:25:12.357680403Z" level=info msg="CreateContainer within sandbox \"81439b63019af72cb290770fd452bbc40939183ad49993abc321f50d555825fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6\"" Sep 3 23:25:12.360086 containerd[2014]: time="2025-09-03T23:25:12.360005631Z" level=info msg="StartContainer for \"26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6\"" Sep 3 23:25:12.365466 containerd[2014]: 
time="2025-09-03T23:25:12.365374899Z" level=info msg="connecting to shim 26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6" address="unix:///run/containerd/s/9aded672e5e7528e7c14c09b6e091dbc1a7c8cf79ecf2e038e96efe9b5300660" protocol=ttrpc version=3 Sep 3 23:25:12.393163 systemd[1]: Started cri-containerd-356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b.scope - libcontainer container 356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b. Sep 3 23:25:12.431060 systemd[1]: Started cri-containerd-26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6.scope - libcontainer container 26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6. Sep 3 23:25:12.529564 containerd[2014]: time="2025-09-03T23:25:12.529423527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-p58l2,Uid:db975dc2-d8ad-46dc-99cb-eb209ea141f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b\"" Sep 3 23:25:12.534967 containerd[2014]: time="2025-09-03T23:25:12.534562407Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 3 23:25:12.564680 containerd[2014]: time="2025-09-03T23:25:12.564625468Z" level=info msg="StartContainer for \"26c056800f7aa49a3e3cba9acbbeb281a67f0e8ddd8db2ac57b100657b5ad3f6\" returns successfully" Sep 3 23:25:13.195853 kubelet[3309]: I0903 23:25:13.195592 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vg4m8" podStartSLOduration=2.195571599 podStartE2EDuration="2.195571599s" podCreationTimestamp="2025-09-03 23:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:13.192397227 +0000 UTC m=+5.442825004" watchObservedRunningTime="2025-09-03 23:25:13.195571599 +0000 UTC m=+5.445999112" Sep 3 23:25:13.448872 update_engine[1983]: I20250903 
23:25:13.447840 1983 update_attempter.cc:509] Updating boot flags... Sep 3 23:25:14.061867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984212384.mount: Deactivated successfully. Sep 3 23:25:15.407104 containerd[2014]: time="2025-09-03T23:25:15.407037906Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:15.408549 containerd[2014]: time="2025-09-03T23:25:15.408495978Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 3 23:25:15.410094 containerd[2014]: time="2025-09-03T23:25:15.409500306Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:15.413198 containerd[2014]: time="2025-09-03T23:25:15.413137530Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:15.414896 containerd[2014]: time="2025-09-03T23:25:15.414836490Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.879778399s" Sep 3 23:25:15.414987 containerd[2014]: time="2025-09-03T23:25:15.414893838Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 3 23:25:15.422849 containerd[2014]: time="2025-09-03T23:25:15.422573310Z" level=info msg="CreateContainer within sandbox \"356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b\" for 
container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 3 23:25:15.433632 containerd[2014]: time="2025-09-03T23:25:15.432732894Z" level=info msg="Container 39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:15.445343 containerd[2014]: time="2025-09-03T23:25:15.445294230Z" level=info msg="CreateContainer within sandbox \"356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\"" Sep 3 23:25:15.446439 containerd[2014]: time="2025-09-03T23:25:15.446396358Z" level=info msg="StartContainer for \"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\"" Sep 3 23:25:15.450644 containerd[2014]: time="2025-09-03T23:25:15.450581970Z" level=info msg="connecting to shim 39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f" address="unix:///run/containerd/s/9c0df559a50606cc6b7f64c6ee86a045b62a48747bbc379e1c6d6cf25cdf751d" protocol=ttrpc version=3 Sep 3 23:25:15.509078 systemd[1]: Started cri-containerd-39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f.scope - libcontainer container 39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f. 
Sep 3 23:25:15.570051 containerd[2014]: time="2025-09-03T23:25:15.569981107Z" level=info msg="StartContainer for \"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" returns successfully" Sep 3 23:25:16.235829 kubelet[3309]: I0903 23:25:16.234110 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-p58l2" podStartSLOduration=2.350653959 podStartE2EDuration="5.234090462s" podCreationTimestamp="2025-09-03 23:25:11 +0000 UTC" firstStartedPulling="2025-09-03 23:25:12.532885203 +0000 UTC m=+4.783312716" lastFinishedPulling="2025-09-03 23:25:15.416321718 +0000 UTC m=+7.666749219" observedRunningTime="2025-09-03 23:25:16.216717918 +0000 UTC m=+8.467145431" watchObservedRunningTime="2025-09-03 23:25:16.234090462 +0000 UTC m=+8.484517963" Sep 3 23:25:24.246113 sudo[2346]: pam_unix(sudo:session): session closed for user root Sep 3 23:25:24.273911 sshd[2345]: Connection closed by 139.178.89.65 port 52376 Sep 3 23:25:24.275300 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:24.285165 systemd[1]: sshd@6-172.31.22.232:22-139.178.89.65:52376.service: Deactivated successfully. Sep 3 23:25:24.293281 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:25:24.293657 systemd[1]: session-7.scope: Consumed 11.871s CPU time, 235.8M memory peak. Sep 3 23:25:24.301267 systemd-logind[1979]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:25:24.305407 systemd-logind[1979]: Removed session 7. Sep 3 23:25:35.113456 systemd[1]: Created slice kubepods-besteffort-podd9918371_c746_4a8d_9ad8_5fe15b3631dc.slice - libcontainer container kubepods-besteffort-podd9918371_c746_4a8d_9ad8_5fe15b3631dc.slice. 
Sep 3 23:25:35.218210 kubelet[3309]: I0903 23:25:35.218132 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9918371-c746-4a8d-9ad8-5fe15b3631dc-tigera-ca-bundle\") pod \"calico-typha-5896f4b585-g7d4q\" (UID: \"d9918371-c746-4a8d-9ad8-5fe15b3631dc\") " pod="calico-system/calico-typha-5896f4b585-g7d4q"
Sep 3 23:25:35.218210 kubelet[3309]: I0903 23:25:35.218217 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d9918371-c746-4a8d-9ad8-5fe15b3631dc-typha-certs\") pod \"calico-typha-5896f4b585-g7d4q\" (UID: \"d9918371-c746-4a8d-9ad8-5fe15b3631dc\") " pod="calico-system/calico-typha-5896f4b585-g7d4q"
Sep 3 23:25:35.218210 kubelet[3309]: I0903 23:25:35.218285 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/d9918371-c746-4a8d-9ad8-5fe15b3631dc-kube-api-access-9ks65\") pod \"calico-typha-5896f4b585-g7d4q\" (UID: \"d9918371-c746-4a8d-9ad8-5fe15b3631dc\") " pod="calico-system/calico-typha-5896f4b585-g7d4q"
Sep 3 23:25:35.422376 containerd[2014]: time="2025-09-03T23:25:35.422099269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5896f4b585-g7d4q,Uid:d9918371-c746-4a8d-9ad8-5fe15b3631dc,Namespace:calico-system,Attempt:0,}"
Sep 3 23:25:35.428334 systemd[1]: Created slice kubepods-besteffort-podf5addc06_9b78_4f6b_b01f_6f9986e3f94c.slice - libcontainer container kubepods-besteffort-podf5addc06_9b78_4f6b_b01f_6f9986e3f94c.slice.
Sep 3 23:25:35.478260 containerd[2014]: time="2025-09-03T23:25:35.477925093Z" level=info msg="connecting to shim 9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714" address="unix:///run/containerd/s/cc580a57ef4797e3655960dc4fe5713063e5b8c61c941aa06afe31fc40bc0551" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:25:35.524867 kubelet[3309]: I0903 23:25:35.524639 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-cni-net-dir\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.527208 kubelet[3309]: I0903 23:25:35.525139 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-lib-modules\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.527208 kubelet[3309]: I0903 23:25:35.527037 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-xtables-lock\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.527208 kubelet[3309]: I0903 23:25:35.527145 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-tigera-ca-bundle\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.527660 kubelet[3309]: I0903 23:25:35.527516 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-node-certs\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.527660 kubelet[3309]: I0903 23:25:35.527590 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-var-lib-calico\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.528697 kubelet[3309]: I0903 23:25:35.527632 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-var-run-calico\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.530583 kubelet[3309]: I0903 23:25:35.530222 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6s86\" (UniqueName: \"kubernetes.io/projected/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-kube-api-access-w6s86\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.530583 kubelet[3309]: I0903 23:25:35.530355 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-cni-bin-dir\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.530583 kubelet[3309]: I0903 23:25:35.530396 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-flexvol-driver-host\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.530583 kubelet[3309]: I0903 23:25:35.530434 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-policysync\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.530583 kubelet[3309]: I0903 23:25:35.530485 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5addc06-9b78-4f6b-b01f-6f9986e3f94c-cni-log-dir\") pod \"calico-node-g6jr4\" (UID: \"f5addc06-9b78-4f6b-b01f-6f9986e3f94c\") " pod="calico-system/calico-node-g6jr4"
Sep 3 23:25:35.564486 systemd[1]: Started cri-containerd-9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714.scope - libcontainer container 9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714.
Sep 3 23:25:35.602739 kubelet[3309]: E0903 23:25:35.602608 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03"
Sep 3 23:25:35.640594 kubelet[3309]: E0903 23:25:35.640542 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.641061 kubelet[3309]: W0903 23:25:35.640757 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.646462 kubelet[3309]: E0903 23:25:35.646400 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.647246 kubelet[3309]: E0903 23:25:35.647194 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.647246 kubelet[3309]: W0903 23:25:35.647230 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.647535 kubelet[3309]: E0903 23:25:35.647261 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.650644 kubelet[3309]: E0903 23:25:35.650589 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.650644 kubelet[3309]: W0903 23:25:35.650627 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.651089 kubelet[3309]: E0903 23:25:35.650661 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.654204 kubelet[3309]: E0903 23:25:35.654117 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.654204 kubelet[3309]: W0903 23:25:35.654157 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.654204 kubelet[3309]: E0903 23:25:35.654190 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.656629 kubelet[3309]: E0903 23:25:35.656572 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.656629 kubelet[3309]: W0903 23:25:35.656622 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.656957 kubelet[3309]: E0903 23:25:35.656655 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.660066 kubelet[3309]: E0903 23:25:35.660015 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.660066 kubelet[3309]: W0903 23:25:35.660054 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.660281 kubelet[3309]: E0903 23:25:35.660091 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.661984 kubelet[3309]: E0903 23:25:35.661934 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.661984 kubelet[3309]: W0903 23:25:35.661972 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.662199 kubelet[3309]: E0903 23:25:35.662004 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.663274 kubelet[3309]: E0903 23:25:35.663230 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.663274 kubelet[3309]: W0903 23:25:35.663265 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.663490 kubelet[3309]: E0903 23:25:35.663295 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.665139 kubelet[3309]: E0903 23:25:35.665096 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.665139 kubelet[3309]: W0903 23:25:35.665133 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.665273 kubelet[3309]: E0903 23:25:35.665166 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.667097 kubelet[3309]: E0903 23:25:35.667046 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.667097 kubelet[3309]: W0903 23:25:35.667087 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.667304 kubelet[3309]: E0903 23:25:35.667121 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.667621 kubelet[3309]: E0903 23:25:35.667589 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.667621 kubelet[3309]: W0903 23:25:35.667617 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.667739 kubelet[3309]: E0903 23:25:35.667642 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.668086 kubelet[3309]: E0903 23:25:35.668055 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.668086 kubelet[3309]: W0903 23:25:35.668081 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.668219 kubelet[3309]: E0903 23:25:35.668105 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.668993 kubelet[3309]: E0903 23:25:35.668944 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.669120 kubelet[3309]: W0903 23:25:35.668981 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.669120 kubelet[3309]: E0903 23:25:35.669056 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.669869 kubelet[3309]: E0903 23:25:35.669696 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.669971 kubelet[3309]: W0903 23:25:35.669873 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.669971 kubelet[3309]: E0903 23:25:35.669903 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.671223 kubelet[3309]: E0903 23:25:35.671172 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.671223 kubelet[3309]: W0903 23:25:35.671207 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.671408 kubelet[3309]: E0903 23:25:35.671237 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.671694 kubelet[3309]: E0903 23:25:35.671655 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.671694 kubelet[3309]: W0903 23:25:35.671684 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.672280 kubelet[3309]: E0903 23:25:35.671708 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.672715 kubelet[3309]: E0903 23:25:35.672607 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.672715 kubelet[3309]: W0903 23:25:35.672642 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.672715 kubelet[3309]: E0903 23:25:35.672693 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.674210 kubelet[3309]: E0903 23:25:35.673891 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.674210 kubelet[3309]: W0903 23:25:35.673916 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.674210 kubelet[3309]: E0903 23:25:35.673943 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.676472 kubelet[3309]: E0903 23:25:35.676310 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.676472 kubelet[3309]: W0903 23:25:35.676349 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.676472 kubelet[3309]: E0903 23:25:35.676382 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.678031 kubelet[3309]: E0903 23:25:35.676990 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.678031 kubelet[3309]: W0903 23:25:35.677010 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.678031 kubelet[3309]: E0903 23:25:35.677034 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.679361 kubelet[3309]: E0903 23:25:35.678588 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.679361 kubelet[3309]: W0903 23:25:35.678623 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.679361 kubelet[3309]: E0903 23:25:35.678664 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:25:35.679361 kubelet[3309]: E0903 23:25:35.679064 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.679361 kubelet[3309]: W0903 23:25:35.679080 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.679361 kubelet[3309]: E0903 23:25:35.679098 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.681228 kubelet[3309]: E0903 23:25:35.680685 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.681228 kubelet[3309]: W0903 23:25:35.680710 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.681228 kubelet[3309]: E0903 23:25:35.680739 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.681228 kubelet[3309]: E0903 23:25:35.681228 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.681429 kubelet[3309]: W0903 23:25:35.681246 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.681429 kubelet[3309]: E0903 23:25:35.681320 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.681574 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.682932 kubelet[3309]: W0903 23:25:35.681602 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.681625 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.681928 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.682932 kubelet[3309]: W0903 23:25:35.681946 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.681970 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.682301 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.682932 kubelet[3309]: W0903 23:25:35.682319 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.682342 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.682932 kubelet[3309]: E0903 23:25:35.682649 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.683479 kubelet[3309]: W0903 23:25:35.682667 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.683479 kubelet[3309]: E0903 23:25:35.682689 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.683904 kubelet[3309]: E0903 23:25:35.683635 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.683904 kubelet[3309]: W0903 23:25:35.683672 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.683904 kubelet[3309]: E0903 23:25:35.683715 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.685365 kubelet[3309]: E0903 23:25:35.684317 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.685365 kubelet[3309]: W0903 23:25:35.684338 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.685365 kubelet[3309]: E0903 23:25:35.684364 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.685917 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.687215 kubelet[3309]: W0903 23:25:35.685943 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.685997 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.686342 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.687215 kubelet[3309]: W0903 23:25:35.686360 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.686402 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.686733 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.687215 kubelet[3309]: W0903 23:25:35.686750 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.686771 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.687215 kubelet[3309]: E0903 23:25:35.687125 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.687686 kubelet[3309]: W0903 23:25:35.687143 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.687686 kubelet[3309]: E0903 23:25:35.687164 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.702526 kubelet[3309]: E0903 23:25:35.702486 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.702728 kubelet[3309]: W0903 23:25:35.702702 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.703044 kubelet[3309]: E0903 23:25:35.702903 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.735256 kubelet[3309]: E0903 23:25:35.735200 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.735611 kubelet[3309]: W0903 23:25:35.735452 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.735611 kubelet[3309]: E0903 23:25:35.735531 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 3 23:25:35.737880 kubelet[3309]: I0903 23:25:35.735847 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6027470-9212-46bc-a2f0-6968361afd03-registration-dir\") pod \"csi-node-driver-w22l4\" (UID: \"e6027470-9212-46bc-a2f0-6968361afd03\") " pod="calico-system/csi-node-driver-w22l4"
Sep 3 23:25:35.740194 kubelet[3309]: I0903 23:25:35.740080 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6027470-9212-46bc-a2f0-6968361afd03-socket-dir\") pod \"csi-node-driver-w22l4\" (UID: \"e6027470-9212-46bc-a2f0-6968361afd03\") " pod="calico-system/csi-node-driver-w22l4"
Sep 3 23:25:35.741088 kubelet[3309]: I0903 23:25:35.741020 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e6027470-9212-46bc-a2f0-6968361afd03-varrun\") pod \"csi-node-driver-w22l4\" (UID: \"e6027470-9212-46bc-a2f0-6968361afd03\") " pod="calico-system/csi-node-driver-w22l4"
Sep 3 23:25:35.744444 kubelet[3309]: I0903 23:25:35.744414 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5j6\" (UniqueName: \"kubernetes.io/projected/e6027470-9212-46bc-a2f0-6968361afd03-kube-api-access-rn5j6\") pod \"csi-node-driver-w22l4\" (UID: \"e6027470-9212-46bc-a2f0-6968361afd03\") " pod="calico-system/csi-node-driver-w22l4"
Sep 3 23:25:35.747185 containerd[2014]: time="2025-09-03T23:25:35.747106587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6jr4,Uid:f5addc06-9b78-4f6b-b01f-6f9986e3f94c,Namespace:calico-system,Attempt:0,}"
Sep 3 23:25:35.750191 kubelet[3309]: I0903 23:25:35.749870 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6027470-9212-46bc-a2f0-6968361afd03-kubelet-dir\") pod \"csi-node-driver-w22l4\" (UID: \"e6027470-9212-46bc-a2f0-6968361afd03\") " pod="calico-system/csi-node-driver-w22l4"
Sep 3 23:25:35.797438 containerd[2014]: time="2025-09-03T23:25:35.797271471Z" level=info msg="connecting to shim 537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf" address="unix:///run/containerd/s/9a0520507a1adc54172e8064f727b8e5a850a764576e3e3d8679563e81546797" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:25:35.875721 systemd[1]: Started cri-containerd-537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf.scope - libcontainer container 537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf.
Sep 3 23:25:35.880862 kubelet[3309]: E0903 23:25:35.880834 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:25:35.881060 kubelet[3309]: W0903 23:25:35.880924 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:25:35.881060 kubelet[3309]: E0903 23:25:35.880953 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.883484 kubelet[3309]: E0903 23:25:35.883405 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.883943 kubelet[3309]: W0903 23:25:35.883637 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.883943 kubelet[3309]: E0903 23:25:35.883675 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.885712 kubelet[3309]: E0903 23:25:35.885622 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.886280 kubelet[3309]: W0903 23:25:35.885927 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.886551 kubelet[3309]: E0903 23:25:35.885969 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.887613 kubelet[3309]: E0903 23:25:35.887385 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.888801 kubelet[3309]: W0903 23:25:35.887962 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.888801 kubelet[3309]: E0903 23:25:35.888137 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.889135 kubelet[3309]: E0903 23:25:35.889106 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.889915 kubelet[3309]: W0903 23:25:35.889875 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.890292 kubelet[3309]: E0903 23:25:35.890053 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.890986 kubelet[3309]: E0903 23:25:35.890953 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.891315 kubelet[3309]: W0903 23:25:35.891170 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.891991 kubelet[3309]: E0903 23:25:35.891487 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.893069 kubelet[3309]: E0903 23:25:35.892824 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.893438 kubelet[3309]: W0903 23:25:35.893322 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.893765 kubelet[3309]: E0903 23:25:35.893645 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.894922 kubelet[3309]: E0903 23:25:35.894741 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.895839 kubelet[3309]: W0903 23:25:35.894774 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.895839 kubelet[3309]: E0903 23:25:35.895235 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.896420 kubelet[3309]: E0903 23:25:35.896291 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.896778 kubelet[3309]: W0903 23:25:35.896650 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.897328 kubelet[3309]: E0903 23:25:35.897034 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:35.899240 kubelet[3309]: E0903 23:25:35.898907 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.899240 kubelet[3309]: W0903 23:25:35.898940 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.899240 kubelet[3309]: E0903 23:25:35.898971 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:25:35.926533 kubelet[3309]: E0903 23:25:35.925474 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:25:35.926533 kubelet[3309]: W0903 23:25:35.925514 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:25:35.926533 kubelet[3309]: E0903 23:25:35.925551 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:25:36.189545 containerd[2014]: time="2025-09-03T23:25:36.189424033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6jr4,Uid:f5addc06-9b78-4f6b-b01f-6f9986e3f94c,Namespace:calico-system,Attempt:0,} returns sandbox id \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\"" Sep 3 23:25:36.197094 containerd[2014]: time="2025-09-03T23:25:36.197028913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 3 23:25:36.211823 containerd[2014]: time="2025-09-03T23:25:36.211739881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5896f4b585-g7d4q,Uid:d9918371-c746-4a8d-9ad8-5fe15b3631dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714\"" Sep 3 23:25:37.085622 kubelet[3309]: E0903 23:25:37.084860 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03" Sep 3 23:25:37.339701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207686281.mount: Deactivated successfully. 
Sep 3 23:25:37.521451 containerd[2014]: time="2025-09-03T23:25:37.520177528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:37.525208 containerd[2014]: time="2025-09-03T23:25:37.525019048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 3 23:25:37.527942 containerd[2014]: time="2025-09-03T23:25:37.527893636Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:37.533816 containerd[2014]: time="2025-09-03T23:25:37.531923908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:37.535481 containerd[2014]: time="2025-09-03T23:25:37.535418476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.338320371s" Sep 3 23:25:37.535571 containerd[2014]: time="2025-09-03T23:25:37.535481800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 3 23:25:37.538517 containerd[2014]: time="2025-09-03T23:25:37.538439740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 3 23:25:37.545150 containerd[2014]: time="2025-09-03T23:25:37.545080732Z" level=info msg="CreateContainer within sandbox 
\"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 3 23:25:37.561810 containerd[2014]: time="2025-09-03T23:25:37.559045048Z" level=info msg="Container 9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:37.576452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848108915.mount: Deactivated successfully. Sep 3 23:25:37.584753 containerd[2014]: time="2025-09-03T23:25:37.584686192Z" level=info msg="CreateContainer within sandbox \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\"" Sep 3 23:25:37.585900 containerd[2014]: time="2025-09-03T23:25:37.585757840Z" level=info msg="StartContainer for \"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\"" Sep 3 23:25:37.592997 containerd[2014]: time="2025-09-03T23:25:37.592171336Z" level=info msg="connecting to shim 9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8" address="unix:///run/containerd/s/9a0520507a1adc54172e8064f727b8e5a850a764576e3e3d8679563e81546797" protocol=ttrpc version=3 Sep 3 23:25:37.643165 systemd[1]: Started cri-containerd-9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8.scope - libcontainer container 9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8. Sep 3 23:25:37.748818 containerd[2014]: time="2025-09-03T23:25:37.748613753Z" level=info msg="StartContainer for \"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\" returns successfully" Sep 3 23:25:37.787324 systemd[1]: cri-containerd-9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8.scope: Deactivated successfully. 
Sep 3 23:25:37.795032 containerd[2014]: time="2025-09-03T23:25:37.794963525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\" id:\"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\" pid:4179 exited_at:{seconds:1756941937 nanos:794009597}" Sep 3 23:25:37.795270 containerd[2014]: time="2025-09-03T23:25:37.795121925Z" level=info msg="received exit event container_id:\"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\" id:\"9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8\" pid:4179 exited_at:{seconds:1756941937 nanos:794009597}" Sep 3 23:25:37.856339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d0f99dd94e648d47442ef4fe6bdf089ef2776ba0ef4113ac72728d542bfc9d8-rootfs.mount: Deactivated successfully. Sep 3 23:25:39.084675 kubelet[3309]: E0903 23:25:39.084606 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03" Sep 3 23:25:39.944656 containerd[2014]: time="2025-09-03T23:25:39.944577080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:39.947320 containerd[2014]: time="2025-09-03T23:25:39.947264372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 3 23:25:39.949942 containerd[2014]: time="2025-09-03T23:25:39.949861040Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:39.954858 containerd[2014]: time="2025-09-03T23:25:39.954753608Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:39.955798 containerd[2014]: time="2025-09-03T23:25:39.955741316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.417122224s" Sep 3 23:25:39.955965 containerd[2014]: time="2025-09-03T23:25:39.955934444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 3 23:25:39.959198 containerd[2014]: time="2025-09-03T23:25:39.959050604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 3 23:25:39.983408 containerd[2014]: time="2025-09-03T23:25:39.982438136Z" level=info msg="CreateContainer within sandbox \"9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 3 23:25:40.010481 containerd[2014]: time="2025-09-03T23:25:40.009055564Z" level=info msg="Container e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:40.030028 containerd[2014]: time="2025-09-03T23:25:40.029953768Z" level=info msg="CreateContainer within sandbox \"9c20f48e20c8a27bce858b6b7bd1da4aa9925e9fff4f2c28a48c650928e4c714\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37\"" Sep 3 23:25:40.031310 containerd[2014]: time="2025-09-03T23:25:40.031246768Z" level=info msg="StartContainer for 
\"e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37\"" Sep 3 23:25:40.034576 containerd[2014]: time="2025-09-03T23:25:40.034403044Z" level=info msg="connecting to shim e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37" address="unix:///run/containerd/s/cc580a57ef4797e3655960dc4fe5713063e5b8c61c941aa06afe31fc40bc0551" protocol=ttrpc version=3 Sep 3 23:25:40.082135 systemd[1]: Started cri-containerd-e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37.scope - libcontainer container e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37. Sep 3 23:25:40.175857 containerd[2014]: time="2025-09-03T23:25:40.175762517Z" level=info msg="StartContainer for \"e9f19fe07a8ae844abea270725847d6d0fc12b5346b850b1bd21e1b56c480e37\" returns successfully" Sep 3 23:25:41.088570 kubelet[3309]: E0903 23:25:41.087069 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03" Sep 3 23:25:41.313824 kubelet[3309]: I0903 23:25:41.313643 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:25:43.085503 kubelet[3309]: E0903 23:25:43.085435 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03" Sep 3 23:25:43.116337 containerd[2014]: time="2025-09-03T23:25:43.116285107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:43.118280 containerd[2014]: time="2025-09-03T23:25:43.118227079Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 3 23:25:43.119968 containerd[2014]: time="2025-09-03T23:25:43.119892823Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:43.124860 containerd[2014]: time="2025-09-03T23:25:43.124380043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:43.125807 containerd[2014]: time="2025-09-03T23:25:43.125748691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.166558155s" Sep 3 23:25:43.125970 containerd[2014]: time="2025-09-03T23:25:43.125939791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 3 23:25:43.134150 containerd[2014]: time="2025-09-03T23:25:43.133951459Z" level=info msg="CreateContainer within sandbox \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 3 23:25:43.154947 containerd[2014]: time="2025-09-03T23:25:43.153046004Z" level=info msg="Container 77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:43.175529 containerd[2014]: time="2025-09-03T23:25:43.175469252Z" level=info msg="CreateContainer within sandbox \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\"" Sep 3 23:25:43.176453 containerd[2014]: time="2025-09-03T23:25:43.176398172Z" level=info msg="StartContainer for \"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\"" Sep 3 23:25:43.180160 containerd[2014]: time="2025-09-03T23:25:43.180038408Z" level=info msg="connecting to shim 77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2" address="unix:///run/containerd/s/9a0520507a1adc54172e8064f727b8e5a850a764576e3e3d8679563e81546797" protocol=ttrpc version=3 Sep 3 23:25:43.228080 systemd[1]: Started cri-containerd-77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2.scope - libcontainer container 77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2. Sep 3 23:25:43.313972 containerd[2014]: time="2025-09-03T23:25:43.313909268Z" level=info msg="StartContainer for \"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\" returns successfully" Sep 3 23:25:43.367226 kubelet[3309]: I0903 23:25:43.365758 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5896f4b585-g7d4q" podStartSLOduration=4.627196806 podStartE2EDuration="8.365629029s" podCreationTimestamp="2025-09-03 23:25:35 +0000 UTC" firstStartedPulling="2025-09-03 23:25:36.219031465 +0000 UTC m=+28.469458978" lastFinishedPulling="2025-09-03 23:25:39.957463688 +0000 UTC m=+32.207891201" observedRunningTime="2025-09-03 23:25:40.33976347 +0000 UTC m=+32.590190995" watchObservedRunningTime="2025-09-03 23:25:43.365629029 +0000 UTC m=+35.616056566" Sep 3 23:25:44.219123 containerd[2014]: time="2025-09-03T23:25:44.219056253Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni 
config" Sep 3 23:25:44.223968 systemd[1]: cri-containerd-77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2.scope: Deactivated successfully. Sep 3 23:25:44.226907 systemd[1]: cri-containerd-77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2.scope: Consumed 958ms CPU time, 186.3M memory peak, 165.8M written to disk. Sep 3 23:25:44.230716 containerd[2014]: time="2025-09-03T23:25:44.230620257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\" id:\"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\" pid:4292 exited_at:{seconds:1756941944 nanos:230208885}" Sep 3 23:25:44.231141 containerd[2014]: time="2025-09-03T23:25:44.230670261Z" level=info msg="received exit event container_id:\"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\" id:\"77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2\" pid:4292 exited_at:{seconds:1756941944 nanos:230208885}" Sep 3 23:25:44.234864 kubelet[3309]: I0903 23:25:44.234766 3309 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 3 23:25:44.301476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77185afc51275585dbefee5b65f4ead053069d770b228e65846081195b1a29f2-rootfs.mount: Deactivated successfully. Sep 3 23:25:44.342693 systemd[1]: Created slice kubepods-burstable-pod6d7feb2e_230f_4abe_b461_c5642b8b4b42.slice - libcontainer container kubepods-burstable-pod6d7feb2e_230f_4abe_b461_c5642b8b4b42.slice. Sep 3 23:25:44.385055 systemd[1]: Created slice kubepods-besteffort-pod6ddbd58e_e872_4cbb_9c4d_a597e72572da.slice - libcontainer container kubepods-besteffort-pod6ddbd58e_e872_4cbb_9c4d_a597e72572da.slice. Sep 3 23:25:44.416209 systemd[1]: Created slice kubepods-besteffort-pod159ac0cf_7cf5_42c9_83e3_bcd858a040ef.slice - libcontainer container kubepods-besteffort-pod159ac0cf_7cf5_42c9_83e3_bcd858a040ef.slice. 
Sep 3 23:25:44.442812 kubelet[3309]: I0903 23:25:44.442073 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9j9z\" (UniqueName: \"kubernetes.io/projected/6d7feb2e-230f-4abe-b461-c5642b8b4b42-kube-api-access-h9j9z\") pod \"coredns-674b8bbfcf-kqbc9\" (UID: \"6d7feb2e-230f-4abe-b461-c5642b8b4b42\") " pod="kube-system/coredns-674b8bbfcf-kqbc9" Sep 3 23:25:44.449185 kubelet[3309]: I0903 23:25:44.449117 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9j7r\" (UniqueName: \"kubernetes.io/projected/6ddbd58e-e872-4cbb-9c4d-a597e72572da-kube-api-access-d9j7r\") pod \"whisker-7fcf77fc8c-dpvcx\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") " pod="calico-system/whisker-7fcf77fc8c-dpvcx" Sep 3 23:25:44.449345 kubelet[3309]: I0903 23:25:44.449304 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d7feb2e-230f-4abe-b461-c5642b8b4b42-config-volume\") pod \"coredns-674b8bbfcf-kqbc9\" (UID: \"6d7feb2e-230f-4abe-b461-c5642b8b4b42\") " pod="kube-system/coredns-674b8bbfcf-kqbc9" Sep 3 23:25:44.449405 kubelet[3309]: I0903 23:25:44.449359 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159ac0cf-7cf5-42c9-83e3-bcd858a040ef-tigera-ca-bundle\") pod \"calico-kube-controllers-6987d979c4-jmhf4\" (UID: \"159ac0cf-7cf5-42c9-83e3-bcd858a040ef\") " pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" Sep 3 23:25:44.449473 kubelet[3309]: I0903 23:25:44.449445 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjnx\" (UniqueName: \"kubernetes.io/projected/159ac0cf-7cf5-42c9-83e3-bcd858a040ef-kube-api-access-kcjnx\") pod \"calico-kube-controllers-6987d979c4-jmhf4\" (UID: 
\"159ac0cf-7cf5-42c9-83e3-bcd858a040ef\") " pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" Sep 3 23:25:44.449539 kubelet[3309]: I0903 23:25:44.449507 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-backend-key-pair\") pod \"whisker-7fcf77fc8c-dpvcx\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") " pod="calico-system/whisker-7fcf77fc8c-dpvcx" Sep 3 23:25:44.449606 kubelet[3309]: I0903 23:25:44.449568 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-ca-bundle\") pod \"whisker-7fcf77fc8c-dpvcx\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") " pod="calico-system/whisker-7fcf77fc8c-dpvcx" Sep 3 23:25:44.450523 systemd[1]: Created slice kubepods-besteffort-pod53a54f96_059e_4d90_a05b_e02c5f4b3458.slice - libcontainer container kubepods-besteffort-pod53a54f96_059e_4d90_a05b_e02c5f4b3458.slice. Sep 3 23:25:44.492350 systemd[1]: Created slice kubepods-besteffort-podd56f78d9_c283_486e_aa03_f318d825ccff.slice - libcontainer container kubepods-besteffort-podd56f78d9_c283_486e_aa03_f318d825ccff.slice. Sep 3 23:25:44.517633 systemd[1]: Created slice kubepods-besteffort-podc9ca8be5_0cc5_4a2a_907d_00ff707fe983.slice - libcontainer container kubepods-besteffort-podc9ca8be5_0cc5_4a2a_907d_00ff707fe983.slice. Sep 3 23:25:44.537156 systemd[1]: Created slice kubepods-burstable-podea7e887c_2b17_4fdd_a3d8_04ca65bca8af.slice - libcontainer container kubepods-burstable-podea7e887c_2b17_4fdd_a3d8_04ca65bca8af.slice. 
Sep 3 23:25:44.550074 kubelet[3309]: I0903 23:25:44.549999 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53a54f96-059e-4d90-a05b-e02c5f4b3458-calico-apiserver-certs\") pod \"calico-apiserver-68c598757d-7xvxl\" (UID: \"53a54f96-059e-4d90-a05b-e02c5f4b3458\") " pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" Sep 3 23:25:44.550462 kubelet[3309]: I0903 23:25:44.550365 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2c8\" (UniqueName: \"kubernetes.io/projected/c9ca8be5-0cc5-4a2a-907d-00ff707fe983-kube-api-access-fr2c8\") pod \"calico-apiserver-68c598757d-96rvh\" (UID: \"c9ca8be5-0cc5-4a2a-907d-00ff707fe983\") " pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" Sep 3 23:25:44.550755 kubelet[3309]: I0903 23:25:44.550641 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxkd\" (UniqueName: \"kubernetes.io/projected/d56f78d9-c283-486e-aa03-f318d825ccff-kube-api-access-2kxkd\") pod \"goldmane-54d579b49d-rw2cj\" (UID: \"d56f78d9-c283-486e-aa03-f318d825ccff\") " pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:44.550755 kubelet[3309]: I0903 23:25:44.550722 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9ca8be5-0cc5-4a2a-907d-00ff707fe983-calico-apiserver-certs\") pod \"calico-apiserver-68c598757d-96rvh\" (UID: \"c9ca8be5-0cc5-4a2a-907d-00ff707fe983\") " pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" Sep 3 23:25:44.551112 kubelet[3309]: I0903 23:25:44.551052 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjv6g\" (UniqueName: 
\"kubernetes.io/projected/53a54f96-059e-4d90-a05b-e02c5f4b3458-kube-api-access-wjv6g\") pod \"calico-apiserver-68c598757d-7xvxl\" (UID: \"53a54f96-059e-4d90-a05b-e02c5f4b3458\") " pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" Sep 3 23:25:44.551251 kubelet[3309]: I0903 23:25:44.551223 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56f78d9-c283-486e-aa03-f318d825ccff-config\") pod \"goldmane-54d579b49d-rw2cj\" (UID: \"d56f78d9-c283-486e-aa03-f318d825ccff\") " pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:44.551638 kubelet[3309]: I0903 23:25:44.551552 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea7e887c-2b17-4fdd-a3d8-04ca65bca8af-config-volume\") pod \"coredns-674b8bbfcf-mjc96\" (UID: \"ea7e887c-2b17-4fdd-a3d8-04ca65bca8af\") " pod="kube-system/coredns-674b8bbfcf-mjc96" Sep 3 23:25:44.551776 kubelet[3309]: I0903 23:25:44.551749 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d56f78d9-c283-486e-aa03-f318d825ccff-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-rw2cj\" (UID: \"d56f78d9-c283-486e-aa03-f318d825ccff\") " pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:44.552077 kubelet[3309]: I0903 23:25:44.551998 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d56f78d9-c283-486e-aa03-f318d825ccff-goldmane-key-pair\") pod \"goldmane-54d579b49d-rw2cj\" (UID: \"d56f78d9-c283-486e-aa03-f318d825ccff\") " pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:44.552298 kubelet[3309]: I0903 23:25:44.552058 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q4tnv\" (UniqueName: \"kubernetes.io/projected/ea7e887c-2b17-4fdd-a3d8-04ca65bca8af-kube-api-access-q4tnv\") pod \"coredns-674b8bbfcf-mjc96\" (UID: \"ea7e887c-2b17-4fdd-a3d8-04ca65bca8af\") " pod="kube-system/coredns-674b8bbfcf-mjc96" Sep 3 23:25:44.679150 containerd[2014]: time="2025-09-03T23:25:44.679045391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqbc9,Uid:6d7feb2e-230f-4abe-b461-c5642b8b4b42,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:44.708857 containerd[2014]: time="2025-09-03T23:25:44.707647403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcf77fc8c-dpvcx,Uid:6ddbd58e-e872-4cbb-9c4d-a597e72572da,Namespace:calico-system,Attempt:0,}" Sep 3 23:25:44.727345 containerd[2014]: time="2025-09-03T23:25:44.727256063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6987d979c4-jmhf4,Uid:159ac0cf-7cf5-42c9-83e3-bcd858a040ef,Namespace:calico-system,Attempt:0,}" Sep 3 23:25:44.783129 containerd[2014]: time="2025-09-03T23:25:44.782381112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-7xvxl,Uid:53a54f96-059e-4d90-a05b-e02c5f4b3458,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:25:44.808527 containerd[2014]: time="2025-09-03T23:25:44.808327464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rw2cj,Uid:d56f78d9-c283-486e-aa03-f318d825ccff,Namespace:calico-system,Attempt:0,}" Sep 3 23:25:44.833243 containerd[2014]: time="2025-09-03T23:25:44.833050152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-96rvh,Uid:c9ca8be5-0cc5-4a2a-907d-00ff707fe983,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:25:44.845734 containerd[2014]: time="2025-09-03T23:25:44.845686884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjc96,Uid:ea7e887c-2b17-4fdd-a3d8-04ca65bca8af,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:45.058080 
containerd[2014]: time="2025-09-03T23:25:45.057822225Z" level=error msg="Failed to destroy network for sandbox \"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.099154 systemd[1]: Created slice kubepods-besteffort-pode6027470_9212_46bc_a2f0_6968361afd03.slice - libcontainer container kubepods-besteffort-pode6027470_9212_46bc_a2f0_6968361afd03.slice. Sep 3 23:25:45.106175 containerd[2014]: time="2025-09-03T23:25:45.106085757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w22l4,Uid:e6027470-9212-46bc-a2f0-6968361afd03,Namespace:calico-system,Attempt:0,}" Sep 3 23:25:45.156465 containerd[2014]: time="2025-09-03T23:25:45.156395649Z" level=error msg="Failed to destroy network for sandbox \"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.501113 containerd[2014]: time="2025-09-03T23:25:45.499549355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqbc9,Uid:6d7feb2e-230f-4abe-b461-c5642b8b4b42,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.504387 kubelet[3309]: E0903 23:25:45.503960 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.504387 kubelet[3309]: E0903 23:25:45.504125 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqbc9" Sep 3 23:25:45.504387 kubelet[3309]: E0903 23:25:45.504190 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqbc9" Sep 3 23:25:45.506078 kubelet[3309]: E0903 23:25:45.504298 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kqbc9_kube-system(6d7feb2e-230f-4abe-b461-c5642b8b4b42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kqbc9_kube-system(6d7feb2e-230f-4abe-b461-c5642b8b4b42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6762a704b95a74391bd285a6ca74bf101745081ebe08fa0ab45b5e430675c98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kqbc9" 
podUID="6d7feb2e-230f-4abe-b461-c5642b8b4b42" Sep 3 23:25:45.523290 containerd[2014]: time="2025-09-03T23:25:45.522340751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcf77fc8c-dpvcx,Uid:6ddbd58e-e872-4cbb-9c4d-a597e72572da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.525964 kubelet[3309]: E0903 23:25:45.524675 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.525964 kubelet[3309]: E0903 23:25:45.524860 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fcf77fc8c-dpvcx" Sep 3 23:25:45.525964 kubelet[3309]: E0903 23:25:45.524924 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-7fcf77fc8c-dpvcx" Sep 3 23:25:45.526297 kubelet[3309]: E0903 23:25:45.525030 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fcf77fc8c-dpvcx_calico-system(6ddbd58e-e872-4cbb-9c4d-a597e72572da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fcf77fc8c-dpvcx_calico-system(6ddbd58e-e872-4cbb-9c4d-a597e72572da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f48c48e45f0f47590c992360e8ae59e76caa9c8b287c1f27bffa707fdddb436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fcf77fc8c-dpvcx" podUID="6ddbd58e-e872-4cbb-9c4d-a597e72572da" Sep 3 23:25:45.743544 kubelet[3309]: I0903 23:25:45.743496 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:25:45.895951 containerd[2014]: time="2025-09-03T23:25:45.895157545Z" level=error msg="Failed to destroy network for sandbox \"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.904638 systemd[1]: run-netns-cni\x2d66774f02\x2dbdb5\x2de86b\x2d8f52\x2d05ad58d78598.mount: Deactivated successfully. 
Sep 3 23:25:45.912721 containerd[2014]: time="2025-09-03T23:25:45.908311369Z" level=error msg="Failed to destroy network for sandbox \"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.917282 systemd[1]: run-netns-cni\x2d52e4b457\x2d2a4a\x2de092\x2d4c8b\x2dd61379f25d67.mount: Deactivated successfully. Sep 3 23:25:45.927383 containerd[2014]: time="2025-09-03T23:25:45.926755993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6987d979c4-jmhf4,Uid:159ac0cf-7cf5-42c9-83e3-bcd858a040ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.930530 kubelet[3309]: E0903 23:25:45.930408 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.930530 kubelet[3309]: E0903 23:25:45.930522 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" Sep 3 23:25:45.931192 kubelet[3309]: E0903 23:25:45.930562 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" Sep 3 23:25:45.931192 kubelet[3309]: E0903 23:25:45.930657 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6987d979c4-jmhf4_calico-system(159ac0cf-7cf5-42c9-83e3-bcd858a040ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6987d979c4-jmhf4_calico-system(159ac0cf-7cf5-42c9-83e3-bcd858a040ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8206e25502b1dab4202c2a6845e3134858c84780806033f97122e0968f3c1572\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" podUID="159ac0cf-7cf5-42c9-83e3-bcd858a040ef" Sep 3 23:25:45.932836 containerd[2014]: time="2025-09-03T23:25:45.932223781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rw2cj,Uid:d56f78d9-c283-486e-aa03-f318d825ccff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 3 23:25:45.933374 containerd[2014]: time="2025-09-03T23:25:45.933118813Z" level=error msg="Failed to destroy network for sandbox \"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.934833 kubelet[3309]: E0903 23:25:45.934601 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.935095 kubelet[3309]: E0903 23:25:45.935027 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:45.935095 kubelet[3309]: E0903 23:25:45.935077 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rw2cj" Sep 3 23:25:45.935291 kubelet[3309]: E0903 23:25:45.935172 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-54d579b49d-rw2cj_calico-system(d56f78d9-c283-486e-aa03-f318d825ccff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-rw2cj_calico-system(d56f78d9-c283-486e-aa03-f318d825ccff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8130b2fe669fb185d0c7ff5705ba70ea55d025cdb777ed9cd0ef55d116907b15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-rw2cj" podUID="d56f78d9-c283-486e-aa03-f318d825ccff" Sep 3 23:25:45.939186 containerd[2014]: time="2025-09-03T23:25:45.938996029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-7xvxl,Uid:53a54f96-059e-4d90-a05b-e02c5f4b3458,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.940933 kubelet[3309]: E0903 23:25:45.940854 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.941267 kubelet[3309]: E0903 23:25:45.941117 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" Sep 3 23:25:45.941267 kubelet[3309]: E0903 23:25:45.941186 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" Sep 3 23:25:45.941539 kubelet[3309]: E0903 23:25:45.941442 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c598757d-7xvxl_calico-apiserver(53a54f96-059e-4d90-a05b-e02c5f4b3458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c598757d-7xvxl_calico-apiserver(53a54f96-059e-4d90-a05b-e02c5f4b3458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0034dd5885c787f520438a9bb6495198b43b00edec606ee0e00a55bc84e8e340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" podUID="53a54f96-059e-4d90-a05b-e02c5f4b3458" Sep 3 23:25:45.950340 containerd[2014]: time="2025-09-03T23:25:45.949983025Z" level=error msg="Failed to destroy network for sandbox \"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.958325 containerd[2014]: 
time="2025-09-03T23:25:45.957997513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-96rvh,Uid:c9ca8be5-0cc5-4a2a-907d-00ff707fe983,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.960807 kubelet[3309]: E0903 23:25:45.960564 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.961437 kubelet[3309]: E0903 23:25:45.961359 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" Sep 3 23:25:45.961929 kubelet[3309]: E0903 23:25:45.961621 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" 
Sep 3 23:25:45.962575 kubelet[3309]: E0903 23:25:45.962407 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c598757d-96rvh_calico-apiserver(c9ca8be5-0cc5-4a2a-907d-00ff707fe983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c598757d-96rvh_calico-apiserver(c9ca8be5-0cc5-4a2a-907d-00ff707fe983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7391cdb3c72061be2041419f3ff7cc03027f45795d19677a9d0bfd5cc0c56a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" podUID="c9ca8be5-0cc5-4a2a-907d-00ff707fe983" Sep 3 23:25:45.978649 containerd[2014]: time="2025-09-03T23:25:45.978396986Z" level=error msg="Failed to destroy network for sandbox \"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.983139 containerd[2014]: time="2025-09-03T23:25:45.983048282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjc96,Uid:ea7e887c-2b17-4fdd-a3d8-04ca65bca8af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.984385 kubelet[3309]: E0903 23:25:45.984028 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.984385 kubelet[3309]: E0903 23:25:45.984120 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mjc96" Sep 3 23:25:45.984385 kubelet[3309]: E0903 23:25:45.984157 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mjc96" Sep 3 23:25:45.986060 kubelet[3309]: E0903 23:25:45.984233 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mjc96_kube-system(ea7e887c-2b17-4fdd-a3d8-04ca65bca8af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mjc96_kube-system(ea7e887c-2b17-4fdd-a3d8-04ca65bca8af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b543d2f74e6af9321030c344baf2e53661ba246a67f8e6ef043d8779d28c6fb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mjc96" 
podUID="ea7e887c-2b17-4fdd-a3d8-04ca65bca8af" Sep 3 23:25:45.993335 containerd[2014]: time="2025-09-03T23:25:45.993184934Z" level=error msg="Failed to destroy network for sandbox \"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.996472 containerd[2014]: time="2025-09-03T23:25:45.996295994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w22l4,Uid:e6027470-9212-46bc-a2f0-6968361afd03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.997212 kubelet[3309]: E0903 23:25:45.997102 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:25:45.997818 kubelet[3309]: E0903 23:25:45.997355 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w22l4" Sep 3 23:25:45.997818 kubelet[3309]: E0903 23:25:45.997446 
3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w22l4" Sep 3 23:25:45.997818 kubelet[3309]: E0903 23:25:45.997575 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w22l4_calico-system(e6027470-9212-46bc-a2f0-6968361afd03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w22l4_calico-system(e6027470-9212-46bc-a2f0-6968361afd03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d806c10cef9e1cebaa36688a37e6e1a5aec504bdbde40db439ed27d0dc1aa44e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w22l4" podUID="e6027470-9212-46bc-a2f0-6968361afd03" Sep 3 23:25:46.298466 systemd[1]: run-netns-cni\x2dfa2bceea\x2dd4e4\x2d26c4\x2d89b2\x2db2dc2aea1b98.mount: Deactivated successfully. Sep 3 23:25:46.298861 systemd[1]: run-netns-cni\x2d3613038d\x2d6f3c\x2d6d87\x2d2d3e\x2d9af92168846f.mount: Deactivated successfully. Sep 3 23:25:46.299112 systemd[1]: run-netns-cni\x2dbb3f3ec2\x2d294f\x2d600a\x2d64f1\x2d90d57d67e240.mount: Deactivated successfully. Sep 3 23:25:46.299328 systemd[1]: run-netns-cni\x2d52e72e27\x2dc539\x2d68c2\x2d92f6\x2dcc0056ac8a7d.mount: Deactivated successfully. 
Sep 3 23:25:46.351012 containerd[2014]: time="2025-09-03T23:25:46.350813483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 3 23:25:52.459067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280747774.mount: Deactivated successfully.
Sep 3 23:25:52.534482 containerd[2014]: time="2025-09-03T23:25:52.534408654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:52.536608 containerd[2014]: time="2025-09-03T23:25:52.536544018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457"
Sep 3 23:25:52.539530 containerd[2014]: time="2025-09-03T23:25:52.539468826Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:52.544740 containerd[2014]: time="2025-09-03T23:25:52.544610310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:52.546048 containerd[2014]: time="2025-09-03T23:25:52.545986170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.195109375s"
Sep 3 23:25:52.546158 containerd[2014]: time="2025-09-03T23:25:52.546045582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\""
Sep 3 23:25:52.589576 containerd[2014]: time="2025-09-03T23:25:52.589518930Z" level=info msg="CreateContainer within sandbox \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 3 23:25:52.612241 containerd[2014]: time="2025-09-03T23:25:52.612140539Z" level=info msg="Container 691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:25:52.634804 containerd[2014]: time="2025-09-03T23:25:52.634708243Z" level=info msg="CreateContainer within sandbox \"537de6898956845d336d6a131f32d0ef932d770a0dca5fe69633571599da7ccf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\""
Sep 3 23:25:52.636134 containerd[2014]: time="2025-09-03T23:25:52.635934487Z" level=info msg="StartContainer for \"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\""
Sep 3 23:25:52.639466 containerd[2014]: time="2025-09-03T23:25:52.639403975Z" level=info msg="connecting to shim 691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48" address="unix:///run/containerd/s/9a0520507a1adc54172e8064f727b8e5a850a764576e3e3d8679563e81546797" protocol=ttrpc version=3
Sep 3 23:25:52.682108 systemd[1]: Started cri-containerd-691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48.scope - libcontainer container 691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48.
Sep 3 23:25:52.780761 containerd[2014]: time="2025-09-03T23:25:52.780515407Z" level=info msg="StartContainer for \"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" returns successfully"
Sep 3 23:25:53.031909 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 3 23:25:53.032048 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 3 23:25:53.340927 kubelet[3309]: I0903 23:25:53.340868 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-ca-bundle\") pod \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") "
Sep 3 23:25:53.341503 kubelet[3309]: I0903 23:25:53.340950 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9j7r\" (UniqueName: \"kubernetes.io/projected/6ddbd58e-e872-4cbb-9c4d-a597e72572da-kube-api-access-d9j7r\") pod \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") "
Sep 3 23:25:53.341503 kubelet[3309]: I0903 23:25:53.340994 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-backend-key-pair\") pod \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\" (UID: \"6ddbd58e-e872-4cbb-9c4d-a597e72572da\") "
Sep 3 23:25:53.342679 kubelet[3309]: I0903 23:25:53.342607 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6ddbd58e-e872-4cbb-9c4d-a597e72572da" (UID: "6ddbd58e-e872-4cbb-9c4d-a597e72572da"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 3 23:25:53.352489 kubelet[3309]: I0903 23:25:53.352412 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6ddbd58e-e872-4cbb-9c4d-a597e72572da" (UID: "6ddbd58e-e872-4cbb-9c4d-a597e72572da"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 3 23:25:53.358147 kubelet[3309]: I0903 23:25:53.358074 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ddbd58e-e872-4cbb-9c4d-a597e72572da-kube-api-access-d9j7r" (OuterVolumeSpecName: "kube-api-access-d9j7r") pod "6ddbd58e-e872-4cbb-9c4d-a597e72572da" (UID: "6ddbd58e-e872-4cbb-9c4d-a597e72572da"). InnerVolumeSpecName "kube-api-access-d9j7r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 3 23:25:53.425262 systemd[1]: Removed slice kubepods-besteffort-pod6ddbd58e_e872_4cbb_9c4d_a597e72572da.slice - libcontainer container kubepods-besteffort-pod6ddbd58e_e872_4cbb_9c4d_a597e72572da.slice.
Sep 3 23:25:53.443145 kubelet[3309]: I0903 23:25:53.443089 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-ca-bundle\") on node \"ip-172-31-22-232\" DevicePath \"\""
Sep 3 23:25:53.443145 kubelet[3309]: I0903 23:25:53.443141 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9j7r\" (UniqueName: \"kubernetes.io/projected/6ddbd58e-e872-4cbb-9c4d-a597e72572da-kube-api-access-d9j7r\") on node \"ip-172-31-22-232\" DevicePath \"\""
Sep 3 23:25:53.444686 kubelet[3309]: I0903 23:25:53.443171 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ddbd58e-e872-4cbb-9c4d-a597e72572da-whisker-backend-key-pair\") on node \"ip-172-31-22-232\" DevicePath \"\""
Sep 3 23:25:53.458125 systemd[1]: var-lib-kubelet-pods-6ddbd58e\x2de872\x2d4cbb\x2d9c4d\x2da597e72572da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9j7r.mount: Deactivated successfully.
Sep 3 23:25:53.458384 systemd[1]: var-lib-kubelet-pods-6ddbd58e\x2de872\x2d4cbb\x2d9c4d\x2da597e72572da-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 3 23:25:53.462697 kubelet[3309]: I0903 23:25:53.461881 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g6jr4" podStartSLOduration=2.109559498 podStartE2EDuration="18.461857807s" podCreationTimestamp="2025-09-03 23:25:35 +0000 UTC" firstStartedPulling="2025-09-03 23:25:36.195133885 +0000 UTC m=+28.445561398" lastFinishedPulling="2025-09-03 23:25:52.547432194 +0000 UTC m=+44.797859707" observedRunningTime="2025-09-03 23:25:53.457888987 +0000 UTC m=+45.708316596" watchObservedRunningTime="2025-09-03 23:25:53.461857807 +0000 UTC m=+45.712285356"
Sep 3 23:25:53.603204 systemd[1]: Created slice kubepods-besteffort-pod0b4312da_b2f2_4331_93d3_225874a7b63c.slice - libcontainer container kubepods-besteffort-pod0b4312da_b2f2_4331_93d3_225874a7b63c.slice.
Sep 3 23:25:53.745770 kubelet[3309]: I0903 23:25:53.745648 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q97ss\" (UniqueName: \"kubernetes.io/projected/0b4312da-b2f2-4331-93d3-225874a7b63c-kube-api-access-q97ss\") pod \"whisker-7fb5b49f78-wvrp2\" (UID: \"0b4312da-b2f2-4331-93d3-225874a7b63c\") " pod="calico-system/whisker-7fb5b49f78-wvrp2"
Sep 3 23:25:53.745929 kubelet[3309]: I0903 23:25:53.745857 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4312da-b2f2-4331-93d3-225874a7b63c-whisker-ca-bundle\") pod \"whisker-7fb5b49f78-wvrp2\" (UID: \"0b4312da-b2f2-4331-93d3-225874a7b63c\") " pod="calico-system/whisker-7fb5b49f78-wvrp2"
Sep 3 23:25:53.745929 kubelet[3309]: I0903 23:25:53.745908 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b4312da-b2f2-4331-93d3-225874a7b63c-whisker-backend-key-pair\") pod \"whisker-7fb5b49f78-wvrp2\" (UID: \"0b4312da-b2f2-4331-93d3-225874a7b63c\") " pod="calico-system/whisker-7fb5b49f78-wvrp2"
Sep 3 23:25:53.832527 containerd[2014]: time="2025-09-03T23:25:53.832175985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"e975d239e82ddd12cf5a4f9d91426df9c043df9b95b7d7dbe2dfaf37144242fa\" pid:4613 exit_status:1 exited_at:{seconds:1756941953 nanos:830636841}"
Sep 3 23:25:53.910549 containerd[2014]: time="2025-09-03T23:25:53.910422561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb5b49f78-wvrp2,Uid:0b4312da-b2f2-4331-93d3-225874a7b63c,Namespace:calico-system,Attempt:0,}"
Sep 3 23:25:54.091360 kubelet[3309]: I0903 23:25:54.091297 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ddbd58e-e872-4cbb-9c4d-a597e72572da" path="/var/lib/kubelet/pods/6ddbd58e-e872-4cbb-9c4d-a597e72572da/volumes"
Sep 3 23:25:54.219497 (udev-worker)[4584]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:25:54.222583 systemd-networkd[1894]: cali98e75b1e8d3: Link UP
Sep 3 23:25:54.224616 systemd-networkd[1894]: cali98e75b1e8d3: Gained carrier
Sep 3 23:25:54.263818 containerd[2014]: 2025-09-03 23:25:53.961 [INFO][4639] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 3 23:25:54.263818 containerd[2014]: 2025-09-03 23:25:54.041 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0 whisker-7fb5b49f78- calico-system 0b4312da-b2f2-4331-93d3-225874a7b63c 901 0 2025-09-03 23:25:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fb5b49f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-232 whisker-7fb5b49f78-wvrp2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali98e75b1e8d3 [] [] }} ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-"
Sep 3 23:25:54.263818 containerd[2014]: 2025-09-03 23:25:54.042 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.263818 containerd[2014]: 2025-09-03 23:25:54.131 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" HandleID="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Workload="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.131 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" HandleID="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Workload="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"whisker-7fb5b49f78-wvrp2", "timestamp":"2025-09-03 23:25:54.131504298 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.131 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.132 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.132 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.149 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" host="ip-172-31-22-232"
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.161 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232"
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.169 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.173 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:54.264153 containerd[2014]: 2025-09-03 23:25:54.177 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.177 [INFO][4650] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" host="ip-172-31-22-232"
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.180 [INFO][4650] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.186 [INFO][4650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" host="ip-172-31-22-232"
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.195 [INFO][4650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.65/26] block=192.168.42.64/26 handle="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" host="ip-172-31-22-232"
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.195 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.65/26] handle="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" host="ip-172-31-22-232"
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.196 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 3 23:25:54.264591 containerd[2014]: 2025-09-03 23:25:54.196 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.65/26] IPv6=[] ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" HandleID="k8s-pod-network.82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Workload="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.265670 containerd[2014]: 2025-09-03 23:25:54.205 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0", GenerateName:"whisker-7fb5b49f78-", Namespace:"calico-system", SelfLink:"", UID:"0b4312da-b2f2-4331-93d3-225874a7b63c", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb5b49f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"whisker-7fb5b49f78-wvrp2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali98e75b1e8d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 3 23:25:54.265670 containerd[2014]: 2025-09-03 23:25:54.205 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.65/32] ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.265897 containerd[2014]: 2025-09-03 23:25:54.205 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98e75b1e8d3 ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.265897 containerd[2014]: 2025-09-03 23:25:54.225 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.266004 containerd[2014]: 2025-09-03 23:25:54.227 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0", GenerateName:"whisker-7fb5b49f78-", Namespace:"calico-system", SelfLink:"", UID:"0b4312da-b2f2-4331-93d3-225874a7b63c", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb5b49f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c", Pod:"whisker-7fb5b49f78-wvrp2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali98e75b1e8d3", MAC:"ea:73:54:43:f9:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 3 23:25:54.266122 containerd[2014]: 2025-09-03 23:25:54.258 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" Namespace="calico-system" Pod="whisker-7fb5b49f78-wvrp2" WorkloadEndpoint="ip--172--31--22--232-k8s-whisker--7fb5b49f78--wvrp2-eth0"
Sep 3 23:25:54.310660 containerd[2014]: time="2025-09-03T23:25:54.310493407Z" level=info msg="connecting to shim 82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c" address="unix:///run/containerd/s/d39237b8df6b524f0e30fb2c9ef035f9c4af43c9f4472652ce2baed59e4a4cf5" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:25:54.356064 systemd[1]: Started cri-containerd-82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c.scope - libcontainer container 82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c.
Sep 3 23:25:54.444211 containerd[2014]: time="2025-09-03T23:25:54.444124004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb5b49f78-wvrp2,Uid:0b4312da-b2f2-4331-93d3-225874a7b63c,Namespace:calico-system,Attempt:0,} returns sandbox id \"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c\""
Sep 3 23:25:54.450049 containerd[2014]: time="2025-09-03T23:25:54.449027660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 3 23:25:54.586027 containerd[2014]: time="2025-09-03T23:25:54.585676028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"606ef42823f32b6a784283548ee6289ff3137a200ca0092b29483dfc9858f1a7\" pid:4720 exit_status:1 exited_at:{seconds:1756941954 nanos:585276452}"
Sep 3 23:25:55.313126 systemd-networkd[1894]: cali98e75b1e8d3: Gained IPv6LL
Sep 3 23:25:55.812579 containerd[2014]: time="2025-09-03T23:25:55.812429362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.815280 containerd[2014]: time="2025-09-03T23:25:55.815121478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 3 23:25:55.818034 containerd[2014]: time="2025-09-03T23:25:55.817944946Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.826559 containerd[2014]: time="2025-09-03T23:25:55.826374034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.828992 containerd[2014]: time="2025-09-03T23:25:55.828718846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.378644786s"
Sep 3 23:25:55.828992 containerd[2014]: time="2025-09-03T23:25:55.828800890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 3 23:25:55.842696 containerd[2014]: time="2025-09-03T23:25:55.841915415Z" level=info msg="CreateContainer within sandbox \"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 3 23:25:55.862255 containerd[2014]: time="2025-09-03T23:25:55.862189451Z" level=info msg="Container f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:25:55.877936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404039061.mount: Deactivated successfully.
Sep 3 23:25:55.897386 containerd[2014]: time="2025-09-03T23:25:55.897321455Z" level=info msg="CreateContainer within sandbox \"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd\""
Sep 3 23:25:55.898572 containerd[2014]: time="2025-09-03T23:25:55.898504007Z" level=info msg="StartContainer for \"f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd\""
Sep 3 23:25:55.904814 containerd[2014]: time="2025-09-03T23:25:55.904047227Z" level=info msg="connecting to shim f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd" address="unix:///run/containerd/s/d39237b8df6b524f0e30fb2c9ef035f9c4af43c9f4472652ce2baed59e4a4cf5" protocol=ttrpc version=3
Sep 3 23:25:55.954215 systemd[1]: Started cri-containerd-f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd.scope - libcontainer container f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd.
Sep 3 23:25:56.066907 containerd[2014]: time="2025-09-03T23:25:56.066383528Z" level=info msg="StartContainer for \"f10c267161cb6b9ed33f31cd844da26449587581a86aec3099545d924b9eacfd\" returns successfully"
Sep 3 23:25:56.072424 containerd[2014]: time="2025-09-03T23:25:56.072303560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 3 23:25:56.090251 containerd[2014]: time="2025-09-03T23:25:56.090201896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rw2cj,Uid:d56f78d9-c283-486e-aa03-f318d825ccff,Namespace:calico-system,Attempt:0,}"
Sep 3 23:25:56.398097 systemd-networkd[1894]: cali65a2f607417: Link UP
Sep 3 23:25:56.398455 systemd-networkd[1894]: cali65a2f607417: Gained carrier
Sep 3 23:25:56.452530 containerd[2014]: 2025-09-03 23:25:56.202 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0 goldmane-54d579b49d- calico-system d56f78d9-c283-486e-aa03-f318d825ccff 826 0 2025-09-03 23:25:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-232 goldmane-54d579b49d-rw2cj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali65a2f607417 [] [] }} ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-"
Sep 3 23:25:56.452530 containerd[2014]: 2025-09-03 23:25:56.204 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.452530 containerd[2014]: 2025-09-03 23:25:56.301 [INFO][4909] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" HandleID="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Workload="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.301 [INFO][4909] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" HandleID="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Workload="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102180), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"goldmane-54d579b49d-rw2cj", "timestamp":"2025-09-03 23:25:56.301466661 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.301 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.301 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.301 [INFO][4909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.332 [INFO][4909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" host="ip-172-31-22-232"
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.342 [INFO][4909] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232"
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.352 [INFO][4909] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.357 [INFO][4909] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:56.452863 containerd[2014]: 2025-09-03 23:25:56.361 [INFO][4909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.361 [INFO][4909] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" host="ip-172-31-22-232"
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.364 [INFO][4909] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.372 [INFO][4909] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" host="ip-172-31-22-232"
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.386 [INFO][4909] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.66/26] block=192.168.42.64/26 handle="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" host="ip-172-31-22-232"
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.386 [INFO][4909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.66/26] handle="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" host="ip-172-31-22-232"
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.386 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 3 23:25:56.453289 containerd[2014]: 2025-09-03 23:25:56.386 [INFO][4909] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.66/26] IPv6=[] ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" HandleID="k8s-pod-network.9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Workload="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.456753 containerd[2014]: 2025-09-03 23:25:56.390 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d56f78d9-c283-486e-aa03-f318d825ccff", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"goldmane-54d579b49d-rw2cj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65a2f607417", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 3 23:25:56.456753 containerd[2014]: 2025-09-03 23:25:56.391 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.66/32] ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.458367 containerd[2014]: 2025-09-03 23:25:56.391 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65a2f607417 ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.458367 containerd[2014]: 2025-09-03 23:25:56.398 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.458471 containerd[2014]: 2025-09-03 23:25:56.399 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d56f78d9-c283-486e-aa03-f318d825ccff", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3", Pod:"goldmane-54d579b49d-rw2cj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65a2f607417", MAC:"ba:a4:31:c2:ac:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 3 23:25:56.458598 containerd[2014]: 2025-09-03 23:25:56.426 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" Namespace="calico-system" Pod="goldmane-54d579b49d-rw2cj" WorkloadEndpoint="ip--172--31--22--232-k8s-goldmane--54d579b49d--rw2cj-eth0"
Sep 3 23:25:56.474752 systemd-networkd[1894]: vxlan.calico: Link UP
Sep 3 23:25:56.475103 systemd-networkd[1894]: vxlan.calico: Gained carrier
Sep 3 23:25:56.479025 (udev-worker)[4585]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:25:56.539012 containerd[2014]: time="2025-09-03T23:25:56.538657378Z" level=info msg="connecting to shim 9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3" address="unix:///run/containerd/s/cab717d283472b7f636df6f5aa0275795591926b9bfe6acd81a2cc8fb2ef0eae" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:25:56.625652 systemd[1]: Started cri-containerd-9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3.scope - libcontainer container 9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3.
Sep 3 23:25:56.712689 containerd[2014]: time="2025-09-03T23:25:56.712625543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rw2cj,Uid:d56f78d9-c283-486e-aa03-f318d825ccff,Namespace:calico-system,Attempt:0,} returns sandbox id \"9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3\""
Sep 3 23:25:57.616362 systemd-networkd[1894]: vxlan.calico: Gained IPv6LL
Sep 3 23:25:58.087109 containerd[2014]: time="2025-09-03T23:25:58.087041218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqbc9,Uid:6d7feb2e-230f-4abe-b461-c5642b8b4b42,Namespace:kube-system,Attempt:0,}"
Sep 3 23:25:58.088807 containerd[2014]: time="2025-09-03T23:25:58.088622878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-96rvh,Uid:c9ca8be5-0cc5-4a2a-907d-00ff707fe983,Namespace:calico-apiserver,Attempt:0,}"
Sep 3 23:25:58.320883 systemd-networkd[1894]: cali65a2f607417: Gained IPv6LL
Sep 3 23:25:58.529698 systemd-networkd[1894]: cali2431963a5eb: Link UP
Sep 3 23:25:58.530132 systemd-networkd[1894]: cali2431963a5eb: Gained carrier
Sep 3 
23:25:58.549133 (udev-worker)[4972]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:25:58.599917 containerd[2014]: 2025-09-03 23:25:58.303 [INFO][5046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0 calico-apiserver-68c598757d- calico-apiserver c9ca8be5-0cc5-4a2a-907d-00ff707fe983 830 0 2025-09-03 23:25:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c598757d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-232 calico-apiserver-68c598757d-96rvh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2431963a5eb [] [] }} ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-" Sep 3 23:25:58.599917 containerd[2014]: 2025-09-03 23:25:58.303 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.599917 containerd[2014]: 2025-09-03 23:25:58.385 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" HandleID="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.386 [INFO][5074] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" HandleID="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003adb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-232", "pod":"calico-apiserver-68c598757d-96rvh", "timestamp":"2025-09-03 23:25:58.385352159 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.386 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.386 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.386 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.418 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" host="ip-172-31-22-232" Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.439 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.449 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.456 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.600498 containerd[2014]: 2025-09-03 23:25:58.465 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.465 [INFO][5074] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" host="ip-172-31-22-232" Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.471 [INFO][5074] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.486 [INFO][5074] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" host="ip-172-31-22-232" Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.504 [INFO][5074] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.67/26] block=192.168.42.64/26 
handle="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" host="ip-172-31-22-232" Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.505 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.67/26] handle="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" host="ip-172-31-22-232" Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.509 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:25:58.601640 containerd[2014]: 2025-09-03 23:25:58.509 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.67/26] IPv6=[] ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" HandleID="k8s-pod-network.2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.604189 containerd[2014]: 2025-09-03 23:25:58.516 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0", GenerateName:"calico-apiserver-68c598757d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9ca8be5-0cc5-4a2a-907d-00ff707fe983", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c598757d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-apiserver-68c598757d-96rvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2431963a5eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:25:58.604365 containerd[2014]: 2025-09-03 23:25:58.518 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.67/32] ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.604365 containerd[2014]: 2025-09-03 23:25:58.519 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2431963a5eb ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.604365 containerd[2014]: 2025-09-03 23:25:58.528 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.604512 
containerd[2014]: 2025-09-03 23:25:58.530 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0", GenerateName:"calico-apiserver-68c598757d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9ca8be5-0cc5-4a2a-907d-00ff707fe983", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c598757d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc", Pod:"calico-apiserver-68c598757d-96rvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2431963a5eb", MAC:"d2:7b:90:a5:f8:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:25:58.604634 
containerd[2014]: 2025-09-03 23:25:58.585 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-96rvh" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--96rvh-eth0" Sep 3 23:25:58.748218 systemd-networkd[1894]: calif3f054ea60e: Link UP Sep 3 23:25:58.751930 systemd-networkd[1894]: calif3f054ea60e: Gained carrier Sep 3 23:25:58.790902 containerd[2014]: time="2025-09-03T23:25:58.790536133Z" level=info msg="connecting to shim 2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc" address="unix:///run/containerd/s/ea999b62d0f2e22053b23c4c364eced8bedb4c4ae8ed26f76e74e562ee248631" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:58.835820 containerd[2014]: 2025-09-03 23:25:58.272 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0 coredns-674b8bbfcf- kube-system 6d7feb2e-230f-4abe-b461-c5642b8b4b42 819 0 2025-09-03 23:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-232 coredns-674b8bbfcf-kqbc9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3f054ea60e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-" Sep 3 23:25:58.835820 containerd[2014]: 2025-09-03 23:25:58.273 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.835820 containerd[2014]: 2025-09-03 23:25:58.430 [INFO][5069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" HandleID="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.430 [INFO][5069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" HandleID="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-232", "pod":"coredns-674b8bbfcf-kqbc9", "timestamp":"2025-09-03 23:25:58.430367975 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.431 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.507 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.507 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.560 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" host="ip-172-31-22-232" Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.589 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.613 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.632 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.836129 containerd[2014]: 2025-09-03 23:25:58.640 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.640 [INFO][5069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" host="ip-172-31-22-232" Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.647 [INFO][5069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749 Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.664 [INFO][5069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" host="ip-172-31-22-232" Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.697 [INFO][5069] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.68/26] block=192.168.42.64/26 
handle="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" host="ip-172-31-22-232" Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.703 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.68/26] handle="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" host="ip-172-31-22-232" Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.705 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:25:58.838373 containerd[2014]: 2025-09-03 23:25:58.705 [INFO][5069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.68/26] IPv6=[] ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" HandleID="k8s-pod-network.7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.718 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6d7feb2e-230f-4abe-b461-c5642b8b4b42", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"coredns-674b8bbfcf-kqbc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3f054ea60e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.718 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.68/32] ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.718 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3f054ea60e ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.753 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.756 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6d7feb2e-230f-4abe-b461-c5642b8b4b42", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749", Pod:"coredns-674b8bbfcf-kqbc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3f054ea60e", MAC:"d2:9e:f5:c2:31:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:25:58.838700 containerd[2014]: 2025-09-03 23:25:58.800 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqbc9" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--kqbc9-eth0" Sep 3 23:25:58.858109 systemd[1]: Started cri-containerd-2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc.scope - libcontainer container 2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc. Sep 3 23:25:58.955318 containerd[2014]: time="2025-09-03T23:25:58.954852626Z" level=info msg="connecting to shim 7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749" address="unix:///run/containerd/s/30a2a03b3d2c9114c1a226ba1be3069a24e299097167f3f3e85aa8ea8728b6f7" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:59.135099 systemd[1]: Started cri-containerd-7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749.scope - libcontainer container 7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749. Sep 3 23:25:59.160836 containerd[2014]: time="2025-09-03T23:25:59.160674059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-96rvh,Uid:c9ca8be5-0cc5-4a2a-907d-00ff707fe983,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc\"" Sep 3 23:25:59.331594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354834445.mount: Deactivated successfully. 
Sep 3 23:25:59.396598 containerd[2014]: time="2025-09-03T23:25:59.396329820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:59.399501 containerd[2014]: time="2025-09-03T23:25:59.398881896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 3 23:25:59.400715 containerd[2014]: time="2025-09-03T23:25:59.400513668Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:59.405067 containerd[2014]: time="2025-09-03T23:25:59.404841984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqbc9,Uid:6d7feb2e-230f-4abe-b461-c5642b8b4b42,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749\"" Sep 3 23:25:59.410742 containerd[2014]: time="2025-09-03T23:25:59.410655876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:59.422028 containerd[2014]: time="2025-09-03T23:25:59.421428804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.349028128s" Sep 3 23:25:59.422028 containerd[2014]: time="2025-09-03T23:25:59.421508436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 3 23:25:59.433062 containerd[2014]: time="2025-09-03T23:25:59.431598696Z" level=info msg="CreateContainer within sandbox \"7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:25:59.433936 containerd[2014]: time="2025-09-03T23:25:59.433861044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 3 23:25:59.460132 containerd[2014]: time="2025-09-03T23:25:59.459994201Z" level=info msg="CreateContainer within sandbox \"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 3 23:25:59.474817 containerd[2014]: time="2025-09-03T23:25:59.473945569Z" level=info msg="Container 158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:59.501852 containerd[2014]: time="2025-09-03T23:25:59.501574933Z" level=info msg="CreateContainer within sandbox \"7e4dac4e1eb9e7f15e5872bd88921ea9df762b0f2bcdb1364691af0bdc815749\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80\"" Sep 3 23:25:59.504832 containerd[2014]: time="2025-09-03T23:25:59.503082649Z" level=info msg="StartContainer for \"158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80\"" Sep 3 23:25:59.510119 containerd[2014]: time="2025-09-03T23:25:59.509448445Z" level=info msg="connecting to shim 158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80" address="unix:///run/containerd/s/30a2a03b3d2c9114c1a226ba1be3069a24e299097167f3f3e85aa8ea8728b6f7" protocol=ttrpc version=3 Sep 3 23:25:59.513744 containerd[2014]: time="2025-09-03T23:25:59.510172681Z" level=info msg="Container ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3: CDI devices from CRI Config.CDIDevices: []" Sep 3 
23:25:59.532366 containerd[2014]: time="2025-09-03T23:25:59.532287757Z" level=info msg="CreateContainer within sandbox \"82f6957de75f5026eaec38785ea2d37fd43c210d1b72fd109081370338355d0c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3\"" Sep 3 23:25:59.535343 containerd[2014]: time="2025-09-03T23:25:59.535251949Z" level=info msg="StartContainer for \"ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3\"" Sep 3 23:25:59.539179 containerd[2014]: time="2025-09-03T23:25:59.539061001Z" level=info msg="connecting to shim ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3" address="unix:///run/containerd/s/d39237b8df6b524f0e30fb2c9ef035f9c4af43c9f4472652ce2baed59e4a4cf5" protocol=ttrpc version=3 Sep 3 23:25:59.574513 systemd[1]: Started cri-containerd-158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80.scope - libcontainer container 158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80. Sep 3 23:25:59.595135 systemd[1]: Started cri-containerd-ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3.scope - libcontainer container ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3. 
Sep 3 23:25:59.712190 containerd[2014]: time="2025-09-03T23:25:59.711770294Z" level=info msg="StartContainer for \"158ec06a48acd9f44fa5083219459ec9c269ca5a3d2dddce45a93f995df68b80\" returns successfully" Sep 3 23:25:59.793689 systemd-networkd[1894]: calif3f054ea60e: Gained IPv6LL Sep 3 23:25:59.803011 containerd[2014]: time="2025-09-03T23:25:59.802881326Z" level=info msg="StartContainer for \"ec10c729b111b9c16c76978bb2f808b24923fe7b81de5aef7eaacbcfd1bb54d3\" returns successfully" Sep 3 23:26:00.048514 systemd-networkd[1894]: cali2431963a5eb: Gained IPv6LL Sep 3 23:26:00.087207 containerd[2014]: time="2025-09-03T23:26:00.086950248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-7xvxl,Uid:53a54f96-059e-4d90-a05b-e02c5f4b3458,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:26:00.087644 containerd[2014]: time="2025-09-03T23:26:00.087603828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w22l4,Uid:e6027470-9212-46bc-a2f0-6968361afd03,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:00.089907 containerd[2014]: time="2025-09-03T23:26:00.089858340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjc96,Uid:ea7e887c-2b17-4fdd-a3d8-04ca65bca8af,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:00.649757 kubelet[3309]: I0903 23:26:00.649650 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7fb5b49f78-wvrp2" podStartSLOduration=2.667303966 podStartE2EDuration="7.649629878s" podCreationTimestamp="2025-09-03 23:25:53 +0000 UTC" firstStartedPulling="2025-09-03 23:25:54.448326692 +0000 UTC m=+46.698754193" lastFinishedPulling="2025-09-03 23:25:59.43065258 +0000 UTC m=+51.681080105" observedRunningTime="2025-09-03 23:26:00.582322898 +0000 UTC m=+52.832750495" watchObservedRunningTime="2025-09-03 23:26:00.649629878 +0000 UTC m=+52.900057427" Sep 3 23:26:00.687765 systemd-networkd[1894]: cali7ee75cbbe0a: Link UP Sep 3 23:26:00.694049 
systemd-networkd[1894]: cali7ee75cbbe0a: Gained carrier Sep 3 23:26:00.769508 kubelet[3309]: I0903 23:26:00.769417 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kqbc9" podStartSLOduration=49.769395495 podStartE2EDuration="49.769395495s" podCreationTimestamp="2025-09-03 23:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:00.65228099 +0000 UTC m=+52.902708515" watchObservedRunningTime="2025-09-03 23:26:00.769395495 +0000 UTC m=+53.019823020" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.269 [INFO][5283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0 coredns-674b8bbfcf- kube-system ea7e887c-2b17-4fdd-a3d8-04ca65bca8af 827 0 2025-09-03 23:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-232 coredns-674b8bbfcf-mjc96 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ee75cbbe0a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.270 [INFO][5283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.397 [INFO][5309] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" HandleID="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.399 [INFO][5309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" HandleID="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-232", "pod":"coredns-674b8bbfcf-mjc96", "timestamp":"2025-09-03 23:26:00.397307905 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.401 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.401 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.401 [INFO][5309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.447 [INFO][5309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.475 [INFO][5309] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.517 [INFO][5309] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.534 [INFO][5309] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.555 [INFO][5309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.555 [INFO][5309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.566 [INFO][5309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8 Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.600 [INFO][5309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.667 [INFO][5309] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.69/26] block=192.168.42.64/26 
handle="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.667 [INFO][5309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.69/26] handle="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" host="ip-172-31-22-232" Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.667 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:00.781922 containerd[2014]: 2025-09-03 23:26:00.667 [INFO][5309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.69/26] IPv6=[] ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" HandleID="k8s-pod-network.0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Workload="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.676 [INFO][5283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea7e887c-2b17-4fdd-a3d8-04ca65bca8af", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"coredns-674b8bbfcf-mjc96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ee75cbbe0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.677 [INFO][5283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.69/32] ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.678 [INFO][5283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ee75cbbe0a ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.687 [INFO][5283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.692 [INFO][5283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea7e887c-2b17-4fdd-a3d8-04ca65bca8af", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8", Pod:"coredns-674b8bbfcf-mjc96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ee75cbbe0a", MAC:"5e:ae:fe:fd:44:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:00.783453 containerd[2014]: 2025-09-03 23:26:00.770 [INFO][5283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjc96" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--674b8bbfcf--mjc96-eth0" Sep 3 23:26:00.972660 systemd-networkd[1894]: califca142e8d3a: Link UP Sep 3 23:26:00.984918 systemd-networkd[1894]: califca142e8d3a: Gained carrier Sep 3 23:26:01.007047 containerd[2014]: time="2025-09-03T23:26:01.006388944Z" level=info msg="connecting to shim 0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8" address="unix:///run/containerd/s/719bb4d9839e7eedf9e2b0ed6147b4431f5c5b35f61323a9fbfc71611e526f3c" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.286 [INFO][5282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0 csi-node-driver- calico-system e6027470-9212-46bc-a2f0-6968361afd03 699 0 2025-09-03 23:25:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-232 csi-node-driver-w22l4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califca142e8d3a [] [] }} 
ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.286 [INFO][5282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.469 [INFO][5316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" HandleID="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Workload="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.475 [INFO][5316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" HandleID="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Workload="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd210), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"csi-node-driver-w22l4", "timestamp":"2025-09-03 23:26:00.469728218 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.475 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.668 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.668 [INFO][5316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.733 [INFO][5316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.750 [INFO][5316] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.767 [INFO][5316] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.805 [INFO][5316] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.812 [INFO][5316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.812 [INFO][5316] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.821 [INFO][5316] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098 Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.851 [INFO][5316] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 
2025-09-03 23:26:00.906 [INFO][5316] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.70/26] block=192.168.42.64/26 handle="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.906 [INFO][5316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.70/26] handle="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" host="ip-172-31-22-232" Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.911 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:01.066540 containerd[2014]: 2025-09-03 23:26:00.911 [INFO][5316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.70/26] IPv6=[] ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" HandleID="k8s-pod-network.90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Workload="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:00.936 [INFO][5282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6027470-9212-46bc-a2f0-6968361afd03", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"csi-node-driver-w22l4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califca142e8d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:00.940 [INFO][5282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.70/32] ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:00.940 [INFO][5282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califca142e8d3a ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:00.994 [INFO][5282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:00.997 
[INFO][5282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6027470-9212-46bc-a2f0-6968361afd03", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098", Pod:"csi-node-driver-w22l4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califca142e8d3a", MAC:"46:6d:fa:c8:40:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:01.068472 containerd[2014]: 2025-09-03 23:26:01.040 [INFO][5282] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" Namespace="calico-system" Pod="csi-node-driver-w22l4" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--w22l4-eth0" Sep 3 23:26:01.087700 containerd[2014]: time="2025-09-03T23:26:01.087598597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6987d979c4-jmhf4,Uid:159ac0cf-7cf5-42c9-83e3-bcd858a040ef,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:01.152843 systemd[1]: Started cri-containerd-0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8.scope - libcontainer container 0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8. Sep 3 23:26:01.286051 systemd-networkd[1894]: cali81b70fb735c: Link UP Sep 3 23:26:01.291520 systemd-networkd[1894]: cali81b70fb735c: Gained carrier Sep 3 23:26:01.307245 containerd[2014]: time="2025-09-03T23:26:01.307133126Z" level=info msg="connecting to shim 90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098" address="unix:///run/containerd/s/c196d9fdf5dc92d0a55159013417073736f6d9b41866f86f0542dadc821132bf" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.276 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0 calico-apiserver-68c598757d- calico-apiserver 53a54f96-059e-4d90-a05b-e02c5f4b3458 829 0 2025-09-03 23:25:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c598757d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-232 calico-apiserver-68c598757d-7xvxl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81b70fb735c [] [] }} 
ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.276 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.479 [INFO][5315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" HandleID="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.480 [INFO][5315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" HandleID="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-232", "pod":"calico-apiserver-68c598757d-7xvxl", "timestamp":"2025-09-03 23:26:00.479676386 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.481 [INFO][5315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.907 [INFO][5315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.908 [INFO][5315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:00.970 [INFO][5315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.008 [INFO][5315] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.033 [INFO][5315] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.057 [INFO][5315] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.080 [INFO][5315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.081 [INFO][5315] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.092 [INFO][5315] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.140 [INFO][5315] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 
2025-09-03 23:26:01.200 [INFO][5315] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.71/26] block=192.168.42.64/26 handle="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.200 [INFO][5315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.71/26] handle="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" host="ip-172-31-22-232" Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.201 [INFO][5315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:01.401534 containerd[2014]: 2025-09-03 23:26:01.201 [INFO][5315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.71/26] IPv6=[] ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" HandleID="k8s-pod-network.9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Workload="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.249 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0", GenerateName:"calico-apiserver-68c598757d-", Namespace:"calico-apiserver", SelfLink:"", UID:"53a54f96-059e-4d90-a05b-e02c5f4b3458", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c598757d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-apiserver-68c598757d-7xvxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b70fb735c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.249 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.71/32] ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.249 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81b70fb735c ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.303 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" 
WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.317 [INFO][5272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0", GenerateName:"calico-apiserver-68c598757d-", Namespace:"calico-apiserver", SelfLink:"", UID:"53a54f96-059e-4d90-a05b-e02c5f4b3458", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c598757d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd", Pod:"calico-apiserver-68c598757d-7xvxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b70fb735c", MAC:"56:51:89:a8:3f:b4", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:01.404732 containerd[2014]: 2025-09-03 23:26:01.387 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" Namespace="calico-apiserver" Pod="calico-apiserver-68c598757d-7xvxl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--68c598757d--7xvxl-eth0" Sep 3 23:26:01.448286 systemd[1]: Started cri-containerd-90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098.scope - libcontainer container 90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098. Sep 3 23:26:01.556069 containerd[2014]: time="2025-09-03T23:26:01.555908031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjc96,Uid:ea7e887c-2b17-4fdd-a3d8-04ca65bca8af,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8\"" Sep 3 23:26:01.597005 containerd[2014]: time="2025-09-03T23:26:01.596730891Z" level=info msg="CreateContainer within sandbox \"0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:26:01.615007 containerd[2014]: time="2025-09-03T23:26:01.614742219Z" level=info msg="connecting to shim 9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd" address="unix:///run/containerd/s/ef5101b6dc6e0136a3c4f7f1b75f88f15781704ec78f272dd755dc3c4643eb2d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:01.776078 containerd[2014]: time="2025-09-03T23:26:01.775731496Z" level=info msg="Container b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:01.777456 containerd[2014]: time="2025-09-03T23:26:01.777311644Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w22l4,Uid:e6027470-9212-46bc-a2f0-6968361afd03,Namespace:calico-system,Attempt:0,} returns sandbox id \"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098\"" Sep 3 23:26:01.801387 containerd[2014]: time="2025-09-03T23:26:01.798419380Z" level=info msg="CreateContainer within sandbox \"0f5ccdd87d0db5127ef6f9dee44b3b4eb0dcc124c497a258c649afa87bae1ae8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d\"" Sep 3 23:26:01.810167 containerd[2014]: time="2025-09-03T23:26:01.808829596Z" level=info msg="StartContainer for \"b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d\"" Sep 3 23:26:01.811876 containerd[2014]: time="2025-09-03T23:26:01.810729796Z" level=info msg="connecting to shim b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d" address="unix:///run/containerd/s/719bb4d9839e7eedf9e2b0ed6147b4431f5c5b35f61323a9fbfc71611e526f3c" protocol=ttrpc version=3 Sep 3 23:26:01.827229 systemd[1]: Started cri-containerd-9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd.scope - libcontainer container 9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd. Sep 3 23:26:01.883369 systemd[1]: Started cri-containerd-b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d.scope - libcontainer container b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d. 
Sep 3 23:26:01.990576 systemd-networkd[1894]: calia1fd12eb866: Link UP Sep 3 23:26:01.994701 systemd-networkd[1894]: calia1fd12eb866: Gained carrier Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.452 [INFO][5375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0 calico-kube-controllers-6987d979c4- calico-system 159ac0cf-7cf5-42c9-83e3-bcd858a040ef 825 0 2025-09-03 23:25:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6987d979c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-232 calico-kube-controllers-6987d979c4-jmhf4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1fd12eb866 [] [] }} ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.452 [INFO][5375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.788 [INFO][5447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" HandleID="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" 
Workload="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.790 [INFO][5447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" HandleID="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039c340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"calico-kube-controllers-6987d979c4-jmhf4", "timestamp":"2025-09-03 23:26:01.78793828 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.791 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.791 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.791 [INFO][5447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232' Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.861 [INFO][5447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.882 [INFO][5447] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.899 [INFO][5447] ipam/ipam.go 511: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.906 [INFO][5447] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.916 [INFO][5447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.916 [INFO][5447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.919 [INFO][5447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7 Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.943 [INFO][5447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.961 [INFO][5447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.42.72/26] block=192.168.42.64/26 
handle="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.961 [INFO][5447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.72/26] handle="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" host="ip-172-31-22-232" Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.962 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:02.068473 containerd[2014]: 2025-09-03 23:26:01.962 [INFO][5447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.72/26] IPv6=[] ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" HandleID="k8s-pod-network.8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:01.971 [INFO][5375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0", GenerateName:"calico-kube-controllers-6987d979c4-", Namespace:"calico-system", SelfLink:"", UID:"159ac0cf-7cf5-42c9-83e3-bcd858a040ef", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6987d979c4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-kube-controllers-6987d979c4-jmhf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1fd12eb866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:01.972 [INFO][5375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.72/32] ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:01.972 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1fd12eb866 ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:02.000 [INFO][5375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" 
WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:02.019 [INFO][5375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0", GenerateName:"calico-kube-controllers-6987d979c4-", Namespace:"calico-system", SelfLink:"", UID:"159ac0cf-7cf5-42c9-83e3-bcd858a040ef", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6987d979c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7", Pod:"calico-kube-controllers-6987d979c4-jmhf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1fd12eb866", 
MAC:"8a:4a:4a:06:15:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:02.070776 containerd[2014]: 2025-09-03 23:26:02.049 [INFO][5375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" Namespace="calico-system" Pod="calico-kube-controllers-6987d979c4-jmhf4" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--6987d979c4--jmhf4-eth0" Sep 3 23:26:02.113010 containerd[2014]: time="2025-09-03T23:26:02.112924958Z" level=info msg="StartContainer for \"b8acee15f3276ba5b41662c2a412ba534b296c16bdb9c5c3a24c238dd9d8179d\" returns successfully" Sep 3 23:26:02.152062 containerd[2014]: time="2025-09-03T23:26:02.151865546Z" level=info msg="connecting to shim 8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7" address="unix:///run/containerd/s/dc65840f88e82a6b21e8438fe1b04d922b12eabcb439a5ac4afea3e7cac6ce6e" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:02.233508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086535991.mount: Deactivated successfully. Sep 3 23:26:02.260139 systemd[1]: Started cri-containerd-8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7.scope - libcontainer container 8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7. 
Sep 3 23:26:02.288523 systemd-networkd[1894]: califca142e8d3a: Gained IPv6LL Sep 3 23:26:02.386648 containerd[2014]: time="2025-09-03T23:26:02.386200179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c598757d-7xvxl,Uid:53a54f96-059e-4d90-a05b-e02c5f4b3458,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd\"" Sep 3 23:26:02.480587 systemd-networkd[1894]: cali81b70fb735c: Gained IPv6LL Sep 3 23:26:02.546105 systemd-networkd[1894]: cali7ee75cbbe0a: Gained IPv6LL Sep 3 23:26:02.613116 containerd[2014]: time="2025-09-03T23:26:02.613046440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6987d979c4-jmhf4,Uid:159ac0cf-7cf5-42c9-83e3-bcd858a040ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7\"" Sep 3 23:26:02.639334 kubelet[3309]: I0903 23:26:02.639110 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mjc96" podStartSLOduration=51.639084592 podStartE2EDuration="51.639084592s" podCreationTimestamp="2025-09-03 23:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:02.63666526 +0000 UTC m=+54.887092869" watchObservedRunningTime="2025-09-03 23:26:02.639084592 +0000 UTC m=+54.889512105" Sep 3 23:26:03.312724 systemd-networkd[1894]: calia1fd12eb866: Gained IPv6LL Sep 3 23:26:03.967249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630656161.mount: Deactivated successfully. 
Sep 3 23:26:04.768831 containerd[2014]: time="2025-09-03T23:26:04.768419875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:04.770722 containerd[2014]: time="2025-09-03T23:26:04.770660083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 3 23:26:04.773542 containerd[2014]: time="2025-09-03T23:26:04.773466043Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:04.777824 containerd[2014]: time="2025-09-03T23:26:04.777684199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:04.779979 containerd[2014]: time="2025-09-03T23:26:04.779922199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.344282851s" Sep 3 23:26:04.780236 containerd[2014]: time="2025-09-03T23:26:04.780083335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 3 23:26:04.783882 containerd[2014]: time="2025-09-03T23:26:04.783658027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 3 23:26:04.792861 containerd[2014]: time="2025-09-03T23:26:04.792589327Z" level=info msg="CreateContainer within sandbox \"9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 3 23:26:04.823488 containerd[2014]: time="2025-09-03T23:26:04.823257871Z" level=info msg="Container 07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:04.836981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211912116.mount: Deactivated successfully. Sep 3 23:26:04.847096 containerd[2014]: time="2025-09-03T23:26:04.846912511Z" level=info msg="CreateContainer within sandbox \"9dfa7d1e8b09f1894eb03bf2ebc560d8db2a6c1ee192c7a569f4c7423e74e0f3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\"" Sep 3 23:26:04.848477 containerd[2014]: time="2025-09-03T23:26:04.848420719Z" level=info msg="StartContainer for \"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\"" Sep 3 23:26:04.853383 containerd[2014]: time="2025-09-03T23:26:04.853294495Z" level=info msg="connecting to shim 07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55" address="unix:///run/containerd/s/cab717d283472b7f636df6f5aa0275795591926b9bfe6acd81a2cc8fb2ef0eae" protocol=ttrpc version=3 Sep 3 23:26:04.904095 systemd[1]: Started cri-containerd-07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55.scope - libcontainer container 07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55. 
Sep 3 23:26:04.997466 containerd[2014]: time="2025-09-03T23:26:04.997340984Z" level=info msg="StartContainer for \"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" returns successfully" Sep 3 23:26:05.502233 ntpd[1969]: Listen normally on 7 vxlan.calico 192.168.42.64:123 Sep 3 23:26:05.502383 ntpd[1969]: Listen normally on 8 cali98e75b1e8d3 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 7 vxlan.calico 192.168.42.64:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 8 cali98e75b1e8d3 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 9 cali65a2f607417 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 10 vxlan.calico [fe80::64d3:fbff:fe0d:4cdb%6]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 11 cali2431963a5eb [fe80::ecee:eeff:feee:eeee%9]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 12 calif3f054ea60e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 3 23:26:05.502830 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 13 cali7ee75cbbe0a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 3 23:26:05.502463 ntpd[1969]: Listen normally on 9 cali65a2f607417 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 3 23:26:05.503338 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 14 califca142e8d3a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 3 23:26:05.503338 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 15 cali81b70fb735c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 3 23:26:05.503338 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Listen normally on 16 calia1fd12eb866 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 3 23:26:05.502529 ntpd[1969]: Listen normally on 10 vxlan.calico [fe80::64d3:fbff:fe0d:4cdb%6]:123 Sep 3 23:26:05.502593 ntpd[1969]: Listen normally on 11 cali2431963a5eb 
[fe80::ecee:eeff:feee:eeee%9]:123 Sep 3 23:26:05.502656 ntpd[1969]: Listen normally on 12 calif3f054ea60e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 3 23:26:05.502728 ntpd[1969]: Listen normally on 13 cali7ee75cbbe0a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 3 23:26:05.502852 ntpd[1969]: Listen normally on 14 califca142e8d3a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 3 23:26:05.502943 ntpd[1969]: Listen normally on 15 cali81b70fb735c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 3 23:26:05.503015 ntpd[1969]: Listen normally on 16 calia1fd12eb866 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 3 23:26:05.666086 kubelet[3309]: I0903 23:26:05.664674 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-rw2cj" podStartSLOduration=19.597749507 podStartE2EDuration="27.664647235s" podCreationTimestamp="2025-09-03 23:25:38 +0000 UTC" firstStartedPulling="2025-09-03 23:25:56.714922067 +0000 UTC m=+48.965349580" lastFinishedPulling="2025-09-03 23:26:04.781819783 +0000 UTC m=+57.032247308" observedRunningTime="2025-09-03 23:26:05.661005751 +0000 UTC m=+57.911433276" watchObservedRunningTime="2025-09-03 23:26:05.664647235 +0000 UTC m=+57.915074772" Sep 3 23:26:05.871804 containerd[2014]: time="2025-09-03T23:26:05.871629476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"7769594f109b8f4f9ef75b040d9d2e6efa71c6c7258c984d5d71165be425786e\" pid:5668 exit_status:1 exited_at:{seconds:1756941965 nanos:866922188}" Sep 3 23:26:06.869817 containerd[2014]: time="2025-09-03T23:26:06.869695941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"40bfe3c93f49133d8e3b8697e19f38ad227040f7b583057486943b57d37d1f17\" pid:5692 exit_status:1 exited_at:{seconds:1756941966 nanos:869317221}" Sep 3 23:26:09.105612 containerd[2014]: time="2025-09-03T23:26:09.104950952Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:09.107743 containerd[2014]: time="2025-09-03T23:26:09.107681240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 3 23:26:09.110066 containerd[2014]: time="2025-09-03T23:26:09.110007248Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:09.115988 containerd[2014]: time="2025-09-03T23:26:09.115919084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:09.117389 containerd[2014]: time="2025-09-03T23:26:09.117183200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.333426029s"
Sep 3 23:26:09.117389 containerd[2014]: time="2025-09-03T23:26:09.117234812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 3 23:26:09.120624 containerd[2014]: time="2025-09-03T23:26:09.120552477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 3 23:26:09.128357 containerd[2014]: time="2025-09-03T23:26:09.128297565Z" level=info msg="CreateContainer within sandbox \"2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 3 23:26:09.149822 containerd[2014]: time="2025-09-03T23:26:09.148952205Z" level=info msg="Container be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:09.169022 containerd[2014]: time="2025-09-03T23:26:09.168955893Z" level=info msg="CreateContainer within sandbox \"2440103da6cb50f611647c723038c401dc17535a1771bf3bde5ff6ef39fd2efc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6\""
Sep 3 23:26:09.170166 containerd[2014]: time="2025-09-03T23:26:09.170076525Z" level=info msg="StartContainer for \"be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6\""
Sep 3 23:26:09.172902 containerd[2014]: time="2025-09-03T23:26:09.172678689Z" level=info msg="connecting to shim be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6" address="unix:///run/containerd/s/ea999b62d0f2e22053b23c4c364eced8bedb4c4ae8ed26f76e74e562ee248631" protocol=ttrpc version=3
Sep 3 23:26:09.221117 systemd[1]: Started cri-containerd-be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6.scope - libcontainer container be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6.
Sep 3 23:26:09.311923 containerd[2014]: time="2025-09-03T23:26:09.311863701Z" level=info msg="StartContainer for \"be92d6a3166a07f429d915bd9510871de029e0e24c6ea855dc97ea57f7a3deb6\" returns successfully"
Sep 3 23:26:09.681040 kubelet[3309]: I0903 23:26:09.680912 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68c598757d-96rvh" podStartSLOduration=34.735375385 podStartE2EDuration="44.680867651s" podCreationTimestamp="2025-09-03 23:25:25 +0000 UTC" firstStartedPulling="2025-09-03 23:25:59.174052331 +0000 UTC m=+51.424479844" lastFinishedPulling="2025-09-03 23:26:09.119544597 +0000 UTC m=+61.369972110" observedRunningTime="2025-09-03 23:26:09.680323115 +0000 UTC m=+61.930750664" watchObservedRunningTime="2025-09-03 23:26:09.680867651 +0000 UTC m=+61.931295152"
Sep 3 23:26:09.982687 containerd[2014]: time="2025-09-03T23:26:09.982616533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"62b824a7e8a8a459ef02f6c43c09ea4a7e4cf9c7dcce69de47c780d57f97f4bc\" pid:5774 exited_at:{seconds:1756941969 nanos:982141873}"
Sep 3 23:26:10.478624 containerd[2014]: time="2025-09-03T23:26:10.478570751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:10.482411 containerd[2014]: time="2025-09-03T23:26:10.482345387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 3 23:26:10.484650 containerd[2014]: time="2025-09-03T23:26:10.484552415Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:10.491576 containerd[2014]: time="2025-09-03T23:26:10.491520575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:10.494514 containerd[2014]: time="2025-09-03T23:26:10.494456639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.373820018s"
Sep 3 23:26:10.494899 containerd[2014]: time="2025-09-03T23:26:10.494850623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 3 23:26:10.499186 containerd[2014]: time="2025-09-03T23:26:10.498841751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 3 23:26:10.507527 containerd[2014]: time="2025-09-03T23:26:10.507464987Z" level=info msg="CreateContainer within sandbox \"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 3 23:26:10.532248 containerd[2014]: time="2025-09-03T23:26:10.531846984Z" level=info msg="Container f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:10.563620 containerd[2014]: time="2025-09-03T23:26:10.563525268Z" level=info msg="CreateContainer within sandbox \"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2\""
Sep 3 23:26:10.564646 containerd[2014]: time="2025-09-03T23:26:10.564587784Z" level=info msg="StartContainer for \"f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2\""
Sep 3 23:26:10.571112 containerd[2014]: time="2025-09-03T23:26:10.570912480Z" level=info msg="connecting to shim f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2" address="unix:///run/containerd/s/c196d9fdf5dc92d0a55159013417073736f6d9b41866f86f0542dadc821132bf" protocol=ttrpc version=3
Sep 3 23:26:10.651536 systemd[1]: Started cri-containerd-f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2.scope - libcontainer container f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2.
Sep 3 23:26:10.793625 containerd[2014]: time="2025-09-03T23:26:10.793016401Z" level=info msg="StartContainer for \"f2431c907123e1646ce80f12c392341bc553b142c253df30641ee89512827da2\" returns successfully"
Sep 3 23:26:10.817164 containerd[2014]: time="2025-09-03T23:26:10.817086481Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:10.821813 containerd[2014]: time="2025-09-03T23:26:10.820203337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 3 23:26:10.827191 containerd[2014]: time="2025-09-03T23:26:10.827130637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 328.20911ms"
Sep 3 23:26:10.827465 containerd[2014]: time="2025-09-03T23:26:10.827432881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 3 23:26:10.831138 containerd[2014]: time="2025-09-03T23:26:10.831078889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 3 23:26:10.840226 containerd[2014]: time="2025-09-03T23:26:10.840062533Z" level=info msg="CreateContainer within sandbox \"9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 3 23:26:10.861623 containerd[2014]: time="2025-09-03T23:26:10.861547069Z" level=info msg="Container c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:10.893661 containerd[2014]: time="2025-09-03T23:26:10.892904005Z" level=info msg="CreateContainer within sandbox \"9dba610852d5cf1318a11caf51c15bcb569f54aa845cdda574745cf8670b09fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd\""
Sep 3 23:26:10.895322 containerd[2014]: time="2025-09-03T23:26:10.895228897Z" level=info msg="StartContainer for \"c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd\""
Sep 3 23:26:10.901934 containerd[2014]: time="2025-09-03T23:26:10.901860733Z" level=info msg="connecting to shim c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd" address="unix:///run/containerd/s/ef5101b6dc6e0136a3c4f7f1b75f88f15781704ec78f272dd755dc3c4643eb2d" protocol=ttrpc version=3
Sep 3 23:26:10.964105 systemd[1]: Started cri-containerd-c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd.scope - libcontainer container c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd.
Sep 3 23:26:11.107012 containerd[2014]: time="2025-09-03T23:26:11.106764922Z" level=info msg="StartContainer for \"c3e97a667560b0737bc086f9faa011206402fad4e2d254dd0214c4beb8391acd\" returns successfully"
Sep 3 23:26:11.702241 kubelet[3309]: I0903 23:26:11.702145 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68c598757d-7xvxl" podStartSLOduration=38.265330635 podStartE2EDuration="46.702122005s" podCreationTimestamp="2025-09-03 23:25:25 +0000 UTC" firstStartedPulling="2025-09-03 23:26:02.392900859 +0000 UTC m=+54.643328372" lastFinishedPulling="2025-09-03 23:26:10.829692241 +0000 UTC m=+63.080119742" observedRunningTime="2025-09-03 23:26:11.700480825 +0000 UTC m=+63.950908362" watchObservedRunningTime="2025-09-03 23:26:11.702122005 +0000 UTC m=+63.952549506"
Sep 3 23:26:12.691476 kubelet[3309]: I0903 23:26:12.691420 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:26:13.491205 systemd[1]: Started sshd@7-172.31.22.232:22-139.178.89.65:55920.service - OpenSSH per-connection server daemon (139.178.89.65:55920).
Sep 3 23:26:13.770655 sshd[5867]: Accepted publickey for core from 139.178.89.65 port 55920 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:13.776509 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:13.796941 systemd-logind[1979]: New session 8 of user core.
Sep 3 23:26:13.802424 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 3 23:26:14.272034 sshd[5871]: Connection closed by 139.178.89.65 port 55920
Sep 3 23:26:14.272527 sshd-session[5867]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:14.284349 systemd[1]: sshd@7-172.31.22.232:22-139.178.89.65:55920.service: Deactivated successfully.
Sep 3 23:26:14.288960 systemd[1]: session-8.scope: Deactivated successfully.
Sep 3 23:26:14.293065 systemd-logind[1979]: Session 8 logged out. Waiting for processes to exit.
Sep 3 23:26:14.301029 systemd-logind[1979]: Removed session 8.
Sep 3 23:26:14.419964 containerd[2014]: time="2025-09-03T23:26:14.419908755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:14.421381 containerd[2014]: time="2025-09-03T23:26:14.421336395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 3 23:26:14.422799 containerd[2014]: time="2025-09-03T23:26:14.422729571Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:14.432108 containerd[2014]: time="2025-09-03T23:26:14.431105451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:14.433174 containerd[2014]: time="2025-09-03T23:26:14.433129755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.60128841s"
Sep 3 23:26:14.433328 containerd[2014]: time="2025-09-03T23:26:14.433299267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 3 23:26:14.437668 containerd[2014]: time="2025-09-03T23:26:14.437609043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 3 23:26:14.481942 containerd[2014]: time="2025-09-03T23:26:14.481868199Z" level=info msg="CreateContainer within sandbox \"8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 3 23:26:14.496913 containerd[2014]: time="2025-09-03T23:26:14.496856763Z" level=info msg="Container fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:14.535898 containerd[2014]: time="2025-09-03T23:26:14.535689951Z" level=info msg="CreateContainer within sandbox \"8c0d71dea8edbfea8d40eea8f103fbdc311d9a0559027e8a2df52b45292786e7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\""
Sep 3 23:26:14.538019 containerd[2014]: time="2025-09-03T23:26:14.537939267Z" level=info msg="StartContainer for \"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\""
Sep 3 23:26:14.543137 containerd[2014]: time="2025-09-03T23:26:14.542863539Z" level=info msg="connecting to shim fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389" address="unix:///run/containerd/s/dc65840f88e82a6b21e8438fe1b04d922b12eabcb439a5ac4afea3e7cac6ce6e" protocol=ttrpc version=3
Sep 3 23:26:14.597710 systemd[1]: Started cri-containerd-fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389.scope - libcontainer container fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389.
Sep 3 23:26:14.745635 containerd[2014]: time="2025-09-03T23:26:14.745574380Z" level=info msg="StartContainer for \"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" returns successfully"
Sep 3 23:26:15.766449 kubelet[3309]: I0903 23:26:15.766348 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6987d979c4-jmhf4" podStartSLOduration=28.951292471 podStartE2EDuration="40.766326246s" podCreationTimestamp="2025-09-03 23:25:35 +0000 UTC" firstStartedPulling="2025-09-03 23:26:02.619957264 +0000 UTC m=+54.870384777" lastFinishedPulling="2025-09-03 23:26:14.434991039 +0000 UTC m=+66.685418552" observedRunningTime="2025-09-03 23:26:15.763411506 +0000 UTC m=+68.013839019" watchObservedRunningTime="2025-09-03 23:26:15.766326246 +0000 UTC m=+68.016753759"
Sep 3 23:26:16.857088 containerd[2014]: time="2025-09-03T23:26:16.857023063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"4a161b364c0afb94d88016f2a4e8fa0a663564d0769980c3d6e622a74037e5f6\" pid:5940 exited_at:{seconds:1756941976 nanos:854650339}"
Sep 3 23:26:17.272730 containerd[2014]: time="2025-09-03T23:26:17.272367353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:17.274880 containerd[2014]: time="2025-09-03T23:26:17.274837205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 3 23:26:17.277254 containerd[2014]: time="2025-09-03T23:26:17.277170305Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:17.281759 containerd[2014]: time="2025-09-03T23:26:17.281675177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:17.284378 containerd[2014]: time="2025-09-03T23:26:17.283477325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.845330418s"
Sep 3 23:26:17.284378 containerd[2014]: time="2025-09-03T23:26:17.283572485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 3 23:26:17.291881 containerd[2014]: time="2025-09-03T23:26:17.291766481Z" level=info msg="CreateContainer within sandbox \"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 3 23:26:17.315163 containerd[2014]: time="2025-09-03T23:26:17.313052957Z" level=info msg="Container 6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:17.340901 containerd[2014]: time="2025-09-03T23:26:17.340836149Z" level=info msg="CreateContainer within sandbox \"90556056f7797059cca9dc43c1576a546812f49a6244ad945bd2da10b17a5098\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26\""
Sep 3 23:26:17.342038 containerd[2014]: time="2025-09-03T23:26:17.341995625Z" level=info msg="StartContainer for \"6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26\""
Sep 3 23:26:17.349744 containerd[2014]: time="2025-09-03T23:26:17.349113461Z" level=info msg="connecting to shim 6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26" address="unix:///run/containerd/s/c196d9fdf5dc92d0a55159013417073736f6d9b41866f86f0542dadc821132bf" protocol=ttrpc version=3
Sep 3 23:26:17.405069 systemd[1]: Started cri-containerd-6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26.scope - libcontainer container 6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26.
Sep 3 23:26:17.503154 containerd[2014]: time="2025-09-03T23:26:17.502906794Z" level=info msg="StartContainer for \"6c4b0592916e82821fe4e2f3f4937b1853077f03ca03e78b940cdde7368c3f26\" returns successfully"
Sep 3 23:26:17.761945 kubelet[3309]: I0903 23:26:17.761415 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w22l4" podStartSLOduration=27.256939874 podStartE2EDuration="42.761389315s" podCreationTimestamp="2025-09-03 23:25:35 +0000 UTC" firstStartedPulling="2025-09-03 23:26:01.78088996 +0000 UTC m=+54.031317473" lastFinishedPulling="2025-09-03 23:26:17.285339365 +0000 UTC m=+69.535766914" observedRunningTime="2025-09-03 23:26:17.760769011 +0000 UTC m=+70.011196764" watchObservedRunningTime="2025-09-03 23:26:17.761389315 +0000 UTC m=+70.011816864"
Sep 3 23:26:18.258406 kubelet[3309]: I0903 23:26:18.258208 3309 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 3 23:26:18.258406 kubelet[3309]: I0903 23:26:18.258263 3309 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 3 23:26:19.320337 systemd[1]: Started sshd@8-172.31.22.232:22-139.178.89.65:55930.service - OpenSSH per-connection server daemon (139.178.89.65:55930).
Sep 3 23:26:19.536464 sshd[5996]: Accepted publickey for core from 139.178.89.65 port 55930 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:19.538587 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:19.549538 systemd-logind[1979]: New session 9 of user core.
Sep 3 23:26:19.558071 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 3 23:26:19.840309 sshd[5998]: Connection closed by 139.178.89.65 port 55930
Sep 3 23:26:19.841188 sshd-session[5996]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:19.848651 systemd-logind[1979]: Session 9 logged out. Waiting for processes to exit.
Sep 3 23:26:19.851200 systemd[1]: sshd@8-172.31.22.232:22-139.178.89.65:55930.service: Deactivated successfully.
Sep 3 23:26:19.856000 systemd[1]: session-9.scope: Deactivated successfully.
Sep 3 23:26:19.859559 systemd-logind[1979]: Removed session 9.
Sep 3 23:26:24.387806 kubelet[3309]: I0903 23:26:24.387603 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:26:24.635922 containerd[2014]: time="2025-09-03T23:26:24.635750762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"ce216f282bdb7be7b46e3079092fb2e66aeac0061e0f11e52f8f5c0a79697031\" pid:6026 exited_at:{seconds:1756941984 nanos:635316806}"
Sep 3 23:26:24.880629 systemd[1]: Started sshd@9-172.31.22.232:22-139.178.89.65:43930.service - OpenSSH per-connection server daemon (139.178.89.65:43930).
Sep 3 23:26:25.090056 sshd[6040]: Accepted publickey for core from 139.178.89.65 port 43930 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:25.093986 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:25.102091 systemd-logind[1979]: New session 10 of user core.
Sep 3 23:26:25.112065 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 3 23:26:25.378838 sshd[6042]: Connection closed by 139.178.89.65 port 43930
Sep 3 23:26:25.378588 sshd-session[6040]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:25.386727 systemd[1]: sshd@9-172.31.22.232:22-139.178.89.65:43930.service: Deactivated successfully.
Sep 3 23:26:25.391011 systemd[1]: session-10.scope: Deactivated successfully.
Sep 3 23:26:25.395529 systemd-logind[1979]: Session 10 logged out. Waiting for processes to exit.
Sep 3 23:26:25.398193 systemd-logind[1979]: Removed session 10.
Sep 3 23:26:25.416338 systemd[1]: Started sshd@10-172.31.22.232:22-139.178.89.65:43946.service - OpenSSH per-connection server daemon (139.178.89.65:43946).
Sep 3 23:26:25.623463 sshd[6055]: Accepted publickey for core from 139.178.89.65 port 43946 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:25.626490 sshd-session[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:25.636215 systemd-logind[1979]: New session 11 of user core.
Sep 3 23:26:25.647109 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 3 23:26:26.016163 sshd[6057]: Connection closed by 139.178.89.65 port 43946
Sep 3 23:26:26.016921 sshd-session[6055]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:26.028927 systemd[1]: sshd@10-172.31.22.232:22-139.178.89.65:43946.service: Deactivated successfully.
Sep 3 23:26:26.034282 systemd[1]: session-11.scope: Deactivated successfully.
Sep 3 23:26:26.040490 systemd-logind[1979]: Session 11 logged out. Waiting for processes to exit.
Sep 3 23:26:26.070629 systemd[1]: Started sshd@11-172.31.22.232:22-139.178.89.65:43958.service - OpenSSH per-connection server daemon (139.178.89.65:43958).
Sep 3 23:26:26.072917 systemd-logind[1979]: Removed session 11.
Sep 3 23:26:26.282950 sshd[6067]: Accepted publickey for core from 139.178.89.65 port 43958 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:26.285149 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:26.293875 systemd-logind[1979]: New session 12 of user core.
Sep 3 23:26:26.302064 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 3 23:26:26.578529 sshd[6069]: Connection closed by 139.178.89.65 port 43958
Sep 3 23:26:26.578325 sshd-session[6067]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:26.586874 systemd[1]: sshd@11-172.31.22.232:22-139.178.89.65:43958.service: Deactivated successfully.
Sep 3 23:26:26.592154 systemd[1]: session-12.scope: Deactivated successfully.
Sep 3 23:26:26.594822 systemd-logind[1979]: Session 12 logged out. Waiting for processes to exit.
Sep 3 23:26:26.599133 systemd-logind[1979]: Removed session 12.
Sep 3 23:26:31.616470 systemd[1]: Started sshd@12-172.31.22.232:22-139.178.89.65:39084.service - OpenSSH per-connection server daemon (139.178.89.65:39084).
Sep 3 23:26:31.813683 sshd[6085]: Accepted publickey for core from 139.178.89.65 port 39084 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:31.815583 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:31.824728 systemd-logind[1979]: New session 13 of user core.
Sep 3 23:26:31.835076 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 3 23:26:32.099477 sshd[6088]: Connection closed by 139.178.89.65 port 39084
Sep 3 23:26:32.100350 sshd-session[6085]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:32.107559 systemd[1]: sshd@12-172.31.22.232:22-139.178.89.65:39084.service: Deactivated successfully.
Sep 3 23:26:32.113079 systemd[1]: session-13.scope: Deactivated successfully.
Sep 3 23:26:32.115943 systemd-logind[1979]: Session 13 logged out. Waiting for processes to exit.
Sep 3 23:26:32.119273 systemd-logind[1979]: Removed session 13.
Sep 3 23:26:36.789694 containerd[2014]: time="2025-09-03T23:26:36.789602054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"90968cbbeab02886ba0e8e555680241a4c37e7c050946cbd06b561a0b2847fe6\" pid:6112 exited_at:{seconds:1756941996 nanos:786976034}"
Sep 3 23:26:37.158091 systemd[1]: Started sshd@13-172.31.22.232:22-139.178.89.65:39086.service - OpenSSH per-connection server daemon (139.178.89.65:39086).
Sep 3 23:26:37.404448 sshd[6124]: Accepted publickey for core from 139.178.89.65 port 39086 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:37.409016 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:37.422552 systemd-logind[1979]: New session 14 of user core.
Sep 3 23:26:37.432967 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 3 23:26:37.778005 sshd[6132]: Connection closed by 139.178.89.65 port 39086
Sep 3 23:26:37.779175 sshd-session[6124]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:37.788906 systemd-logind[1979]: Session 14 logged out. Waiting for processes to exit.
Sep 3 23:26:37.789768 systemd[1]: sshd@13-172.31.22.232:22-139.178.89.65:39086.service: Deactivated successfully.
Sep 3 23:26:37.795356 systemd[1]: session-14.scope: Deactivated successfully.
Sep 3 23:26:37.801876 systemd-logind[1979]: Removed session 14.
Sep 3 23:26:42.824844 systemd[1]: Started sshd@14-172.31.22.232:22-139.178.89.65:59148.service - OpenSSH per-connection server daemon (139.178.89.65:59148).
Sep 3 23:26:43.018284 sshd[6146]: Accepted publickey for core from 139.178.89.65 port 59148 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:43.020003 sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:43.031889 systemd-logind[1979]: New session 15 of user core.
Sep 3 23:26:43.042158 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:26:43.318009 sshd[6150]: Connection closed by 139.178.89.65 port 59148
Sep 3 23:26:43.318860 sshd-session[6146]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:43.327127 systemd[1]: sshd@14-172.31.22.232:22-139.178.89.65:59148.service: Deactivated successfully.
Sep 3 23:26:43.331657 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:26:43.335737 systemd-logind[1979]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:26:43.339064 systemd-logind[1979]: Removed session 15.
Sep 3 23:26:46.414745 containerd[2014]: time="2025-09-03T23:26:46.414607786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"6f41bde29564f916a774c68d4dd72e1d71b87040cb812313171288402fb066d1\" pid:6173 exited_at:{seconds:1756942006 nanos:414103762}"
Sep 3 23:26:46.797508 containerd[2014]: time="2025-09-03T23:26:46.796017984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"aa74e69689df9d0dc3c3a69f53e622c2e0b13463726a20a1b240c6ae16f8353d\" pid:6197 exited_at:{seconds:1756942006 nanos:795598656}"
Sep 3 23:26:48.360701 systemd[1]: Started sshd@15-172.31.22.232:22-139.178.89.65:59164.service - OpenSSH per-connection server daemon (139.178.89.65:59164).
Sep 3 23:26:48.571582 sshd[6208]: Accepted publickey for core from 139.178.89.65 port 59164 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:48.574248 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:48.582505 systemd-logind[1979]: New session 16 of user core.
Sep 3 23:26:48.592061 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:26:48.864043 sshd[6210]: Connection closed by 139.178.89.65 port 59164
Sep 3 23:26:48.865340 sshd-session[6208]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:48.873069 systemd[1]: sshd@15-172.31.22.232:22-139.178.89.65:59164.service: Deactivated successfully.
Sep 3 23:26:48.879244 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:26:48.882933 systemd-logind[1979]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:26:48.903492 systemd[1]: Started sshd@16-172.31.22.232:22-139.178.89.65:59180.service - OpenSSH per-connection server daemon (139.178.89.65:59180).
Sep 3 23:26:48.906507 systemd-logind[1979]: Removed session 16.
Sep 3 23:26:49.111140 sshd[6222]: Accepted publickey for core from 139.178.89.65 port 59180 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:49.115089 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:49.129758 systemd-logind[1979]: New session 17 of user core.
Sep 3 23:26:49.136169 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:26:49.721662 sshd[6224]: Connection closed by 139.178.89.65 port 59180
Sep 3 23:26:49.723067 sshd-session[6222]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:49.729450 systemd[1]: sshd@16-172.31.22.232:22-139.178.89.65:59180.service: Deactivated successfully.
Sep 3 23:26:49.730097 systemd-logind[1979]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:26:49.734072 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:26:49.738901 systemd-logind[1979]: Removed session 17.
Sep 3 23:26:49.764234 systemd[1]: Started sshd@17-172.31.22.232:22-139.178.89.65:59190.service - OpenSSH per-connection server daemon (139.178.89.65:59190).
Sep 3 23:26:49.970200 sshd[6234]: Accepted publickey for core from 139.178.89.65 port 59190 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:49.973022 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:49.982957 systemd-logind[1979]: New session 18 of user core.
Sep 3 23:26:49.991105 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:26:51.155710 sshd[6236]: Connection closed by 139.178.89.65 port 59190
Sep 3 23:26:51.155504 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:51.165156 systemd[1]: sshd@17-172.31.22.232:22-139.178.89.65:59190.service: Deactivated successfully.
Sep 3 23:26:51.175409 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:26:51.181208 systemd-logind[1979]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:26:51.217249 systemd[1]: Started sshd@18-172.31.22.232:22-139.178.89.65:49784.service - OpenSSH per-connection server daemon (139.178.89.65:49784).
Sep 3 23:26:51.219882 systemd-logind[1979]: Removed session 18.
Sep 3 23:26:51.426755 sshd[6252]: Accepted publickey for core from 139.178.89.65 port 49784 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:51.429665 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:51.438966 systemd-logind[1979]: New session 19 of user core.
Sep 3 23:26:51.453092 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:26:52.020814 sshd[6255]: Connection closed by 139.178.89.65 port 49784 Sep 3 23:26:52.019719 sshd-session[6252]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:52.029517 systemd[1]: sshd@18-172.31.22.232:22-139.178.89.65:49784.service: Deactivated successfully. Sep 3 23:26:52.034937 systemd[1]: session-19.scope: Deactivated successfully. Sep 3 23:26:52.038273 systemd-logind[1979]: Session 19 logged out. Waiting for processes to exit. Sep 3 23:26:52.055622 systemd-logind[1979]: Removed session 19. Sep 3 23:26:52.058549 systemd[1]: Started sshd@19-172.31.22.232:22-139.178.89.65:49790.service - OpenSSH per-connection server daemon (139.178.89.65:49790). Sep 3 23:26:52.256995 sshd[6265]: Accepted publickey for core from 139.178.89.65 port 49790 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:52.260030 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:52.268304 systemd-logind[1979]: New session 20 of user core. Sep 3 23:26:52.277047 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 3 23:26:52.525920 sshd[6267]: Connection closed by 139.178.89.65 port 49790 Sep 3 23:26:52.526708 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:52.535066 systemd-logind[1979]: Session 20 logged out. Waiting for processes to exit. Sep 3 23:26:52.536260 systemd[1]: sshd@19-172.31.22.232:22-139.178.89.65:49790.service: Deactivated successfully. Sep 3 23:26:52.542814 systemd[1]: session-20.scope: Deactivated successfully. Sep 3 23:26:52.548590 systemd-logind[1979]: Removed session 20. 
Sep 3 23:26:54.559883 containerd[2014]: time="2025-09-03T23:26:54.559745118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"e2abd1e28dc841284b56355cb3a7ca5240f9002c444199118ad338c83518b32b\" pid:6290 exited_at:{seconds:1756942014 nanos:559025058}"
Sep 3 23:26:57.565911 systemd[1]: Started sshd@20-172.31.22.232:22-139.178.89.65:49798.service - OpenSSH per-connection server daemon (139.178.89.65:49798).
Sep 3 23:26:57.771443 sshd[6304]: Accepted publickey for core from 139.178.89.65 port 49798 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:57.776077 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:57.784451 systemd-logind[1979]: New session 21 of user core.
Sep 3 23:26:57.792097 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 3 23:26:58.055486 sshd[6308]: Connection closed by 139.178.89.65 port 49798
Sep 3 23:26:58.055350 sshd-session[6304]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:58.063470 systemd[1]: sshd@20-172.31.22.232:22-139.178.89.65:49798.service: Deactivated successfully.
Sep 3 23:26:58.067656 systemd[1]: session-21.scope: Deactivated successfully.
Sep 3 23:26:58.071922 systemd-logind[1979]: Session 21 logged out. Waiting for processes to exit.
Sep 3 23:26:58.074963 systemd-logind[1979]: Removed session 21.
Sep 3 23:27:03.094814 systemd[1]: Started sshd@21-172.31.22.232:22-139.178.89.65:39502.service - OpenSSH per-connection server daemon (139.178.89.65:39502).
Sep 3 23:27:03.314851 sshd[6322]: Accepted publickey for core from 139.178.89.65 port 39502 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:27:03.319232 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:03.335062 systemd-logind[1979]: New session 22 of user core.
Sep 3 23:27:03.341005 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 3 23:27:03.693642 sshd[6324]: Connection closed by 139.178.89.65 port 39502
Sep 3 23:27:03.695094 sshd-session[6322]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:03.706106 systemd[1]: sshd@21-172.31.22.232:22-139.178.89.65:39502.service: Deactivated successfully.
Sep 3 23:27:03.713856 systemd[1]: session-22.scope: Deactivated successfully.
Sep 3 23:27:03.718246 systemd-logind[1979]: Session 22 logged out. Waiting for processes to exit.
Sep 3 23:27:03.725845 systemd-logind[1979]: Removed session 22.
Sep 3 23:27:06.824799 containerd[2014]: time="2025-09-03T23:27:06.824677111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"8cd368783943125f9643f6bff2f9da64cab069346229ef7467ebf9d82b86c7e5\" pid:6347 exited_at:{seconds:1756942026 nanos:824213323}"
Sep 3 23:27:08.734248 systemd[1]: Started sshd@22-172.31.22.232:22-139.178.89.65:39504.service - OpenSSH per-connection server daemon (139.178.89.65:39504).
Sep 3 23:27:08.942679 sshd[6361]: Accepted publickey for core from 139.178.89.65 port 39504 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:27:08.944394 sshd-session[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:08.957981 systemd-logind[1979]: New session 23 of user core.
Sep 3 23:27:08.964120 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 3 23:27:09.258307 sshd[6363]: Connection closed by 139.178.89.65 port 39504
Sep 3 23:27:09.259955 sshd-session[6361]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:09.276298 systemd[1]: sshd@22-172.31.22.232:22-139.178.89.65:39504.service: Deactivated successfully.
Sep 3 23:27:09.284224 systemd[1]: session-23.scope: Deactivated successfully.
Sep 3 23:27:09.290167 systemd-logind[1979]: Session 23 logged out. Waiting for processes to exit.
Sep 3 23:27:09.297121 systemd-logind[1979]: Removed session 23.
Sep 3 23:27:09.997431 containerd[2014]: time="2025-09-03T23:27:09.997367123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"685030ab9b157f0022e0359a240442f2371723b2219a77f0c806ee63ff2e524f\" pid:6386 exited_at:{seconds:1756942029 nanos:996885935}"
Sep 3 23:27:14.302214 systemd[1]: Started sshd@23-172.31.22.232:22-139.178.89.65:50150.service - OpenSSH per-connection server daemon (139.178.89.65:50150).
Sep 3 23:27:14.511971 sshd[6399]: Accepted publickey for core from 139.178.89.65 port 50150 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:27:14.515529 sshd-session[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:14.525620 systemd-logind[1979]: New session 24 of user core.
Sep 3 23:27:14.534077 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 3 23:27:14.849863 sshd[6401]: Connection closed by 139.178.89.65 port 50150
Sep 3 23:27:14.852081 sshd-session[6399]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:14.860727 systemd[1]: sshd@23-172.31.22.232:22-139.178.89.65:50150.service: Deactivated successfully.
Sep 3 23:27:14.868503 systemd[1]: session-24.scope: Deactivated successfully.
Sep 3 23:27:14.877393 systemd-logind[1979]: Session 24 logged out. Waiting for processes to exit.
Sep 3 23:27:14.880919 systemd-logind[1979]: Removed session 24.
Sep 3 23:27:16.830077 containerd[2014]: time="2025-09-03T23:27:16.829976645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"ba073b2f3a051f234c3e512109b00b40216502d80a80a96bda9ca6805392c366\" pid:6424 exited_at:{seconds:1756942036 nanos:829040573}"
Sep 3 23:27:19.892435 systemd[1]: Started sshd@24-172.31.22.232:22-139.178.89.65:50166.service - OpenSSH per-connection server daemon (139.178.89.65:50166).
Sep 3 23:27:20.111563 sshd[6439]: Accepted publickey for core from 139.178.89.65 port 50166 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:27:20.116498 sshd-session[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:20.129112 systemd-logind[1979]: New session 25 of user core.
Sep 3 23:27:20.136072 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 3 23:27:20.452066 sshd[6441]: Connection closed by 139.178.89.65 port 50166
Sep 3 23:27:20.451941 sshd-session[6439]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:20.463498 systemd[1]: sshd@24-172.31.22.232:22-139.178.89.65:50166.service: Deactivated successfully.
Sep 3 23:27:20.467650 systemd[1]: session-25.scope: Deactivated successfully.
Sep 3 23:27:20.473004 systemd-logind[1979]: Session 25 logged out. Waiting for processes to exit.
Sep 3 23:27:20.478276 systemd-logind[1979]: Removed session 25.
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.447934 1983 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.448011 1983 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.448487 1983 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449306 1983 omaha_request_params.cc:62] Current group set to beta
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449476 1983 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449498 1983 update_attempter.cc:643] Scheduling an action processor start.
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449532 1983 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449584 1983 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449684 1983 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449715 1983 omaha_request_action.cc:272] Request:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]:
Sep 3 23:27:21.449812 update_engine[1983]: I20250903 23:27:21.449736 1983 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 3 23:27:21.464256 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 3 23:27:21.465877 update_engine[1983]: I20250903 23:27:21.465370 1983 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 3 23:27:21.466431 update_engine[1983]: I20250903 23:27:21.466286 1983 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 3 23:27:21.499523 update_engine[1983]: E20250903 23:27:21.499456 1983 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 3 23:27:21.499845 update_engine[1983]: I20250903 23:27:21.499773 1983 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 3 23:27:24.625121 containerd[2014]: time="2025-09-03T23:27:24.625060464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"c043b17d91fc2e4f440a614522bc6fc71074cb119777bf60c2e82964637dc774\" pid:6468 exited_at:{seconds:1756942044 nanos:624441732}"
Sep 3 23:27:25.493147 systemd[1]: Started sshd@25-172.31.22.232:22-139.178.89.65:52892.service - OpenSSH per-connection server daemon (139.178.89.65:52892).
Sep 3 23:27:25.711411 sshd[6480]: Accepted publickey for core from 139.178.89.65 port 52892 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:27:25.714200 sshd-session[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:25.725382 systemd-logind[1979]: New session 26 of user core.
Sep 3 23:27:25.732090 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 3 23:27:26.024335 sshd[6482]: Connection closed by 139.178.89.65 port 52892
Sep 3 23:27:26.025191 sshd-session[6480]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:26.033537 systemd[1]: sshd@25-172.31.22.232:22-139.178.89.65:52892.service: Deactivated successfully.
Sep 3 23:27:26.044414 systemd[1]: session-26.scope: Deactivated successfully.
Sep 3 23:27:26.049449 systemd-logind[1979]: Session 26 logged out. Waiting for processes to exit.
Sep 3 23:27:26.053185 systemd-logind[1979]: Removed session 26.
Sep 3 23:27:31.451545 update_engine[1983]: I20250903 23:27:31.451370 1983 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 3 23:27:31.452109 update_engine[1983]: I20250903 23:27:31.451838 1983 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 3 23:27:31.452289 update_engine[1983]: I20250903 23:27:31.452229 1983 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 3 23:27:31.456891 update_engine[1983]: E20250903 23:27:31.456823 1983 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 3 23:27:31.457004 update_engine[1983]: I20250903 23:27:31.456918 1983 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 3 23:27:36.761844 containerd[2014]: time="2025-09-03T23:27:36.761729628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cedb71cee88d32d175ce0aa6e4c6dc0ae7949e1f2f32719387537b52008a55\" id:\"b2a8eb31de84e0718c971faeb7d14ca311c7b82d33434ec9c573e948c61754bf\" pid:6526 exited_at:{seconds:1756942056 nanos:761189676}"
Sep 3 23:27:40.689218 systemd[1]: cri-containerd-b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935.scope: Deactivated successfully.
Sep 3 23:27:40.692514 systemd[1]: cri-containerd-b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935.scope: Consumed 4.678s CPU time, 57.5M memory peak, 64K read from disk.
Sep 3 23:27:40.703176 containerd[2014]: time="2025-09-03T23:27:40.703106763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\" id:\"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\" pid:3154 exit_status:1 exited_at:{seconds:1756942060 nanos:702498519}"
Sep 3 23:27:40.704564 containerd[2014]: time="2025-09-03T23:27:40.703202403Z" level=info msg="received exit event container_id:\"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\" id:\"b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935\" pid:3154 exit_status:1 exited_at:{seconds:1756942060 nanos:702498519}"
Sep 3 23:27:40.739580 systemd[1]: cri-containerd-39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f.scope: Deactivated successfully.
Sep 3 23:27:40.740761 systemd[1]: cri-containerd-39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f.scope: Consumed 23.464s CPU time, 99.1M memory peak, 544K read from disk.
Sep 3 23:27:40.749175 containerd[2014]: time="2025-09-03T23:27:40.747754516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" id:\"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" pid:3904 exit_status:1 exited_at:{seconds:1756942060 nanos:745397260}"
Sep 3 23:27:40.749518 containerd[2014]: time="2025-09-03T23:27:40.748125844Z" level=info msg="received exit event container_id:\"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" id:\"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" pid:3904 exit_status:1 exited_at:{seconds:1756942060 nanos:745397260}"
Sep 3 23:27:40.779448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935-rootfs.mount: Deactivated successfully.
Sep 3 23:27:40.812860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f-rootfs.mount: Deactivated successfully.
Sep 3 23:27:41.058752 kubelet[3309]: I0903 23:27:41.058419 3309 scope.go:117] "RemoveContainer" containerID="b8f2ead548abeba8121c8814e75a2cf02b059fcc90d3bd480117b9005ae9e935"
Sep 3 23:27:41.062396 kubelet[3309]: I0903 23:27:41.062339 3309 scope.go:117] "RemoveContainer" containerID="39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f"
Sep 3 23:27:41.071707 containerd[2014]: time="2025-09-03T23:27:41.071657749Z" level=info msg="CreateContainer within sandbox \"356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 3 23:27:41.072833 containerd[2014]: time="2025-09-03T23:27:41.072755269Z" level=info msg="CreateContainer within sandbox \"7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 3 23:27:41.092724 containerd[2014]: time="2025-09-03T23:27:41.092640385Z" level=info msg="Container 3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:41.104657 containerd[2014]: time="2025-09-03T23:27:41.104583061Z" level=info msg="Container ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:41.115324 containerd[2014]: time="2025-09-03T23:27:41.115258165Z" level=info msg="CreateContainer within sandbox \"356a2acdd291b55eedc7652d489bdaaf8079814ab27497da82ab8ff7ed60be4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\""
Sep 3 23:27:41.117666 containerd[2014]: time="2025-09-03T23:27:41.117544813Z" level=info msg="StartContainer for \"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\""
Sep 3 23:27:41.121261 containerd[2014]: time="2025-09-03T23:27:41.121129021Z" level=info msg="connecting to shim 3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc" address="unix:///run/containerd/s/9c0df559a50606cc6b7f64c6ee86a045b62a48747bbc379e1c6d6cf25cdf751d" protocol=ttrpc version=3
Sep 3 23:27:41.133162 containerd[2014]: time="2025-09-03T23:27:41.132999566Z" level=info msg="CreateContainer within sandbox \"7241c93c49c0ef94568a0659899c8329cf5ecfee16372be7f2c3dc1f0c1edf28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b\""
Sep 3 23:27:41.134375 containerd[2014]: time="2025-09-03T23:27:41.134051282Z" level=info msg="StartContainer for \"ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b\""
Sep 3 23:27:41.139473 kubelet[3309]: E0903 23:27:41.139165 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 3 23:27:41.141130 containerd[2014]: time="2025-09-03T23:27:41.141041054Z" level=info msg="connecting to shim ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b" address="unix:///run/containerd/s/c40f5c0857c1d9166389563ec17ab47be8b71da4bacd71a7a373803347276ebf" protocol=ttrpc version=3
Sep 3 23:27:41.168233 systemd[1]: Started cri-containerd-3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc.scope - libcontainer container 3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc.
Sep 3 23:27:41.200144 systemd[1]: Started cri-containerd-ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b.scope - libcontainer container ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b.
Sep 3 23:27:41.275961 containerd[2014]: time="2025-09-03T23:27:41.275901122Z" level=info msg="StartContainer for \"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\" returns successfully"
Sep 3 23:27:41.323753 containerd[2014]: time="2025-09-03T23:27:41.323235302Z" level=info msg="StartContainer for \"ebd4aab3a5fb788157d72875c67b7fbd8c6463fb70def7e37f30df4548661e7b\" returns successfully"
Sep 3 23:27:41.448071 update_engine[1983]: I20250903 23:27:41.447986 1983 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 3 23:27:41.448608 update_engine[1983]: I20250903 23:27:41.448339 1983 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 3 23:27:41.448904 update_engine[1983]: I20250903 23:27:41.448758 1983 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 3 23:27:41.450136 update_engine[1983]: E20250903 23:27:41.450018 1983 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 3 23:27:41.450136 update_engine[1983]: I20250903 23:27:41.450095 1983 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 3 23:27:46.033670 systemd[1]: cri-containerd-92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78.scope: Deactivated successfully.
Sep 3 23:27:46.035260 systemd[1]: cri-containerd-92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78.scope: Consumed 5.963s CPU time, 23M memory peak, 128K read from disk.
Sep 3 23:27:46.044587 containerd[2014]: time="2025-09-03T23:27:46.044399838Z" level=info msg="received exit event container_id:\"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\" id:\"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\" pid:3134 exit_status:1 exited_at:{seconds:1756942066 nanos:39971502}"
Sep 3 23:27:46.045722 containerd[2014]: time="2025-09-03T23:27:46.045478098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\" id:\"92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78\" pid:3134 exit_status:1 exited_at:{seconds:1756942066 nanos:39971502}"
Sep 3 23:27:46.111762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78-rootfs.mount: Deactivated successfully.
Sep 3 23:27:46.443471 containerd[2014]: time="2025-09-03T23:27:46.443157416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"955467301670588d93d72bb63dd1cf9509f501dc22090ca8ac3308ae4577ee90\" pid:6649 exit_status:1 exited_at:{seconds:1756942066 nanos:442682960}"
Sep 3 23:27:46.796878 containerd[2014]: time="2025-09-03T23:27:46.796676422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdaefb146b97e0956f88eca960ebf77d9c085a1cddda56ea30dfc46fab80a389\" id:\"627d355a36bf0c9236d8d5dc12dd8ba107d79504ab75e3473ea5be2160d03464\" pid:6672 exit_status:1 exited_at:{seconds:1756942066 nanos:795966478}"
Sep 3 23:27:47.108287 kubelet[3309]: I0903 23:27:47.107054 3309 scope.go:117] "RemoveContainer" containerID="92b052aa8faad824c369982c0ca8c6dfac0ef933f8ff9b2b1bfaa54bae065f78"
Sep 3 23:27:47.117285 containerd[2014]: time="2025-09-03T23:27:47.117185827Z" level=info msg="CreateContainer within sandbox \"ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 3 23:27:47.146889 containerd[2014]: time="2025-09-03T23:27:47.146820235Z" level=info msg="Container 95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:47.159043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186391620.mount: Deactivated successfully.
Sep 3 23:27:47.169709 containerd[2014]: time="2025-09-03T23:27:47.169657460Z" level=info msg="CreateContainer within sandbox \"ad8302d23811779cb9933b1b3b62f5f7b68d322cffcb7bd22f7020942f52ea9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af\""
Sep 3 23:27:47.172846 containerd[2014]: time="2025-09-03T23:27:47.170829572Z" level=info msg="StartContainer for \"95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af\""
Sep 3 23:27:47.173144 containerd[2014]: time="2025-09-03T23:27:47.173100896Z" level=info msg="connecting to shim 95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af" address="unix:///run/containerd/s/39b6d0581fddd6d2c7b1fed74e1c9efbdea3c0e0f335637f35516855423d8700" protocol=ttrpc version=3
Sep 3 23:27:47.223108 systemd[1]: Started cri-containerd-95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af.scope - libcontainer container 95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af.
Sep 3 23:27:47.307807 containerd[2014]: time="2025-09-03T23:27:47.307734584Z" level=info msg="StartContainer for \"95ab1f54ad1b19d98d71d23fcb1c8a625821e468df741eb4c08c4db54a53b1af\" returns successfully"
Sep 3 23:27:51.141525 kubelet[3309]: E0903 23:27:51.141466 3309 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-22-232)"
Sep 3 23:27:51.450714 update_engine[1983]: I20250903 23:27:51.450615 1983 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 3 23:27:51.451312 update_engine[1983]: I20250903 23:27:51.451067 1983 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 3 23:27:51.451476 update_engine[1983]: I20250903 23:27:51.451426 1983 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 3 23:27:51.452673 update_engine[1983]: E20250903 23:27:51.452622 1983 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 3 23:27:51.452880 update_engine[1983]: I20250903 23:27:51.452701 1983 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 3 23:27:51.452880 update_engine[1983]: I20250903 23:27:51.452723 1983 omaha_request_action.cc:617] Omaha request response:
Sep 3 23:27:51.452880 update_engine[1983]: E20250903 23:27:51.452862 1983 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.452903 1983 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.452919 1983 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.452934 1983 update_attempter.cc:306] Processing Done.
Sep 3 23:27:51.453042 update_engine[1983]: E20250903 23:27:51.452958 1983 update_attempter.cc:619] Update failed.
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.452975 1983 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.452989 1983 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 3 23:27:51.453042 update_engine[1983]: I20250903 23:27:51.453006 1983 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 3 23:27:51.453342 update_engine[1983]: I20250903 23:27:51.453109 1983 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 3 23:27:51.453342 update_engine[1983]: I20250903 23:27:51.453146 1983 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 3 23:27:51.453342 update_engine[1983]: I20250903 23:27:51.453163 1983 omaha_request_action.cc:272] Request:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]:
Sep 3 23:27:51.453342 update_engine[1983]: I20250903 23:27:51.453178 1983 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 3 23:27:51.453885 update_engine[1983]: I20250903 23:27:51.453442 1983 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 3 23:27:51.454122 update_engine[1983]: I20250903 23:27:51.453769 1983 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 3 23:27:51.454433 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 3 23:27:51.474380 update_engine[1983]: E20250903 23:27:51.474298 1983 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 3 23:27:51.474481 update_engine[1983]: I20250903 23:27:51.474424 1983 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 3 23:27:51.474481 update_engine[1983]: I20250903 23:27:51.474445 1983 omaha_request_action.cc:617] Omaha request response:
Sep 3 23:27:51.474481 update_engine[1983]: I20250903 23:27:51.474462 1983 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 3 23:27:51.474653 update_engine[1983]: I20250903 23:27:51.474476 1983 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 3 23:27:51.474653 update_engine[1983]: I20250903 23:27:51.474493 1983 update_attempter.cc:306] Processing Done.
Sep 3 23:27:51.474653 update_engine[1983]: I20250903 23:27:51.474507 1983 update_attempter.cc:310] Error event sent.
Sep 3 23:27:51.474653 update_engine[1983]: I20250903 23:27:51.474531 1983 update_check_scheduler.cc:74] Next update check in 49m27s
Sep 3 23:27:51.475151 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 3 23:27:52.746681 systemd[1]: cri-containerd-3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc.scope: Deactivated successfully.
Sep 3 23:27:52.748879 containerd[2014]: time="2025-09-03T23:27:52.747757047Z" level=info msg="received exit event container_id:\"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\" id:\"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\" pid:6582 exit_status:1 exited_at:{seconds:1756942072 nanos:746341383}"
Sep 3 23:27:52.750667 containerd[2014]: time="2025-09-03T23:27:52.750609999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\" id:\"3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc\" pid:6582 exit_status:1 exited_at:{seconds:1756942072 nanos:746341383}"
Sep 3 23:27:52.791946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc-rootfs.mount: Deactivated successfully.
Sep 3 23:27:53.134759 kubelet[3309]: I0903 23:27:53.134623 3309 scope.go:117] "RemoveContainer" containerID="39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f"
Sep 3 23:27:53.135452 kubelet[3309]: I0903 23:27:53.135416 3309 scope.go:117] "RemoveContainer" containerID="3c7db443a351dd3c3f008b891e31e73e2efcedc7d4f0adeaae07c9f75ee6f1fc"
Sep 3 23:27:53.136660 kubelet[3309]: E0903 23:27:53.135657 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-p58l2_tigera-operator(db975dc2-d8ad-46dc-99cb-eb209ea141f3)\"" pod="tigera-operator/tigera-operator-755d956888-p58l2" podUID="db975dc2-d8ad-46dc-99cb-eb209ea141f3"
Sep 3 23:27:53.139158 containerd[2014]: time="2025-09-03T23:27:53.139051777Z" level=info msg="RemoveContainer for \"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\""
Sep 3 23:27:53.151270 containerd[2014]: time="2025-09-03T23:27:53.151123369Z" level=info msg="RemoveContainer for \"39758c26ec4bb634992cb9b76fdb399ba66db5fe12026e2cf3fa452bb58aee5f\" returns successfully"
Sep 3 23:27:54.538629 containerd[2014]: time="2025-09-03T23:27:54.538571320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"691dae3a3f7536d59f269fe32611169838db9ba904e99a37a916e78ff3fd4a48\" id:\"8f2ca70851cffdb9bd20eee76abf3e7fa883001b4f8a576b5451920d9d69a1aa\" pid:6741 exited_at:{seconds:1756942074 nanos:537896368}"
Sep 3 23:28:01.142576 kubelet[3309]: E0903 23:28:01.142220 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"