Sep 16 04:39:03.148352 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 16 04:39:03.148396 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025
Sep 16 04:39:03.148480 kernel: KASLR disabled due to lack of seed
Sep 16 04:39:03.148497 kernel: efi: EFI v2.7 by EDK II
Sep 16 04:39:03.148513 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 16 04:39:03.148529 kernel: secureboot: Secure boot disabled
Sep 16 04:39:03.148546 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:39:03.148561 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 16 04:39:03.148576 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 16 04:39:03.148592 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 16 04:39:03.148607 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 16 04:39:03.148627 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 16 04:39:03.148642 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 16 04:39:03.148657 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 16 04:39:03.148675 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 16 04:39:03.148691 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 16 04:39:03.148712 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 16 04:39:03.148729 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 16 04:39:03.148745 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 16 04:39:03.148761 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 16 04:39:03.148777 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 16 04:39:03.148793 kernel: printk: legacy bootconsole [uart0] enabled
Sep 16 04:39:03.148809 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 16 04:39:03.148826 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 16 04:39:03.148843 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 16 04:39:03.148859 kernel: Zone ranges:
Sep 16 04:39:03.148875 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 16 04:39:03.150473 kernel:   DMA32    empty
Sep 16 04:39:03.150493 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 16 04:39:03.150509 kernel:   Device   empty
Sep 16 04:39:03.150525 kernel: Movable zone start for each node
Sep 16 04:39:03.150541 kernel: Early memory node ranges
Sep 16 04:39:03.150557 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 16 04:39:03.150573 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 16 04:39:03.150589 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 16 04:39:03.150605 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 16 04:39:03.150620 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 16 04:39:03.150636 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 16 04:39:03.150651 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 16 04:39:03.150674 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 16 04:39:03.150697 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 16 04:39:03.150714 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 16 04:39:03.150731 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 16 04:39:03.150748 kernel: psci: probing for conduit method from ACPI.
Sep 16 04:39:03.150768 kernel: psci: PSCIv1.0 detected in firmware.
Sep 16 04:39:03.150785 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 16 04:39:03.150802 kernel: psci: Trusted OS migration not required
Sep 16 04:39:03.150818 kernel: psci: SMC Calling Convention v1.1
Sep 16 04:39:03.150835 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 16 04:39:03.150852 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 16 04:39:03.150868 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 16 04:39:03.150886 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 16 04:39:03.150902 kernel: Detected PIPT I-cache on CPU0
Sep 16 04:39:03.150919 kernel: CPU features: detected: GIC system register CPU interface
Sep 16 04:39:03.150936 kernel: CPU features: detected: Spectre-v2
Sep 16 04:39:03.150956 kernel: CPU features: detected: Spectre-v3a
Sep 16 04:39:03.150973 kernel: CPU features: detected: Spectre-BHB
Sep 16 04:39:03.150990 kernel: CPU features: detected: ARM erratum 1742098
Sep 16 04:39:03.151006 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 16 04:39:03.151023 kernel: alternatives: applying boot alternatives
Sep 16 04:39:03.151041 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313
Sep 16 04:39:03.151060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:39:03.151077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:39:03.151094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:39:03.151110 kernel: Fallback order for Node 0: 0
Sep 16 04:39:03.151131 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1007616
Sep 16 04:39:03.151148 kernel: Policy zone: Normal
Sep 16 04:39:03.151165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:39:03.151181 kernel: software IO TLB: area num 2.
Sep 16 04:39:03.151197 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Sep 16 04:39:03.151214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 04:39:03.151230 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:39:03.151248 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:39:03.151265 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 04:39:03.151282 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:39:03.151299 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:39:03.151316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:39:03.151337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 04:39:03.151354 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:39:03.151371 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:39:03.151388 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 16 04:39:03.151422 kernel: GICv3: 96 SPIs implemented
Sep 16 04:39:03.151442 kernel: GICv3: 0 Extended SPIs implemented
Sep 16 04:39:03.151459 kernel: Root IRQ handler: gic_handle_irq
Sep 16 04:39:03.151475 kernel: GICv3: GICv3 features: 16 PPIs
Sep 16 04:39:03.151492 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 16 04:39:03.151509 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 16 04:39:03.151525 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 16 04:39:03.151542 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 16 04:39:03.151565 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 16 04:39:03.151581 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 16 04:39:03.151598 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 16 04:39:03.151615 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 16 04:39:03.151631 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:39:03.151648 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 16 04:39:03.151665 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 16 04:39:03.151682 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 16 04:39:03.151699 kernel: Console: colour dummy device 80x25
Sep 16 04:39:03.151716 kernel: printk: legacy console [tty1] enabled
Sep 16 04:39:03.151733 kernel: ACPI: Core revision 20240827
Sep 16 04:39:03.151756 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 16 04:39:03.151773 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:39:03.151790 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:39:03.151808 kernel: landlock: Up and running.
Sep 16 04:39:03.151825 kernel: SELinux:  Initializing.
Sep 16 04:39:03.151842 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:39:03.151859 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:39:03.151876 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:39:03.151894 kernel: rcu: 	Max phase no-delay instances is 400.
Sep 16 04:39:03.151916 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:39:03.151933 kernel: Remapping and enabling EFI services.
Sep 16 04:39:03.151950 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:39:03.151967 kernel: Detected PIPT I-cache on CPU1
Sep 16 04:39:03.151985 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 16 04:39:03.152002 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 16 04:39:03.152020 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 16 04:39:03.152036 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 04:39:03.152054 kernel: SMP: Total of 2 processors activated.
Sep 16 04:39:03.152085 kernel: CPU: All CPU(s) started at EL1
Sep 16 04:39:03.152103 kernel: CPU features: detected: 32-bit EL0 Support
Sep 16 04:39:03.152125 kernel: CPU features: detected: 32-bit EL1 Support
Sep 16 04:39:03.152155 kernel: CPU features: detected: CRC32 instructions
Sep 16 04:39:03.152179 kernel: alternatives: applying system-wide alternatives
Sep 16 04:39:03.152198 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved)
Sep 16 04:39:03.152217 kernel: devtmpfs: initialized
Sep 16 04:39:03.152240 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:39:03.152259 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 04:39:03.152299 kernel: 17040 pages in range for non-PLT usage
Sep 16 04:39:03.152318 kernel: 508560 pages in range for PLT usage
Sep 16 04:39:03.152336 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:39:03.152354 kernel: SMBIOS 3.0.0 present.
Sep 16 04:39:03.152372 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 16 04:39:03.152389 kernel: DMI: Memory slots populated: 0/0
Sep 16 04:39:03.152444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:39:03.152471 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 16 04:39:03.152489 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 16 04:39:03.152508 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 16 04:39:03.152526 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:39:03.152544 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Sep 16 04:39:03.152561 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:39:03.152579 kernel: cpuidle: using governor menu
Sep 16 04:39:03.152597 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 16 04:39:03.152615 kernel: ASID allocator initialised with 65536 entries
Sep 16 04:39:03.152649 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:39:03.152671 kernel: Serial: AMBA PL011 UART driver
Sep 16 04:39:03.152689 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:39:03.152707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:39:03.152726 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 16 04:39:03.152743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 16 04:39:03.152761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:39:03.152779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:39:03.152797 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 16 04:39:03.152820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 16 04:39:03.152838 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:39:03.152856 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:39:03.152873 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:39:03.152891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:39:03.152910 kernel: ACPI: Interpreter enabled
Sep 16 04:39:03.152928 kernel: ACPI: Using GIC for interrupt routing
Sep 16 04:39:03.152946 kernel: ACPI: MCFG table detected, 1 entries
Sep 16 04:39:03.152963 kernel: ACPI: CPU0 has been hot-added
Sep 16 04:39:03.152985 kernel: ACPI: CPU1 has been hot-added
Sep 16 04:39:03.153004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 16 04:39:03.153315 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:39:03.153554 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 16 04:39:03.153750 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 16 04:39:03.153938 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 16 04:39:03.154131 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 16 04:39:03.154165 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 16 04:39:03.154185 kernel: acpiphp: Slot [1] registered
Sep 16 04:39:03.154203 kernel: acpiphp: Slot [2] registered
Sep 16 04:39:03.154221 kernel: acpiphp: Slot [3] registered
Sep 16 04:39:03.154240 kernel: acpiphp: Slot [4] registered
Sep 16 04:39:03.154258 kernel: acpiphp: Slot [5] registered
Sep 16 04:39:03.154276 kernel: acpiphp: Slot [6] registered
Sep 16 04:39:03.154294 kernel: acpiphp: Slot [7] registered
Sep 16 04:39:03.154313 kernel: acpiphp: Slot [8] registered
Sep 16 04:39:03.154331 kernel: acpiphp: Slot [9] registered
Sep 16 04:39:03.154355 kernel: acpiphp: Slot [10] registered
Sep 16 04:39:03.154374 kernel: acpiphp: Slot [11] registered
Sep 16 04:39:03.154392 kernel: acpiphp: Slot [12] registered
Sep 16 04:39:03.162490 kernel: acpiphp: Slot [13] registered
Sep 16 04:39:03.162521 kernel: acpiphp: Slot [14] registered
Sep 16 04:39:03.162540 kernel: acpiphp: Slot [15] registered
Sep 16 04:39:03.162558 kernel: acpiphp: Slot [16] registered
Sep 16 04:39:03.162576 kernel: acpiphp: Slot [17] registered
Sep 16 04:39:03.162593 kernel: acpiphp: Slot [18] registered
Sep 16 04:39:03.162621 kernel: acpiphp: Slot [19] registered
Sep 16 04:39:03.162639 kernel: acpiphp: Slot [20] registered
Sep 16 04:39:03.162657 kernel: acpiphp: Slot [21] registered
Sep 16 04:39:03.162674 kernel: acpiphp: Slot [22] registered
Sep 16 04:39:03.162692 kernel: acpiphp: Slot [23] registered
Sep 16 04:39:03.162710 kernel: acpiphp: Slot [24] registered
Sep 16 04:39:03.162727 kernel: acpiphp: Slot [25] registered
Sep 16 04:39:03.162745 kernel: acpiphp: Slot [26] registered
Sep 16 04:39:03.162762 kernel: acpiphp: Slot [27] registered
Sep 16 04:39:03.162780 kernel: acpiphp: Slot [28] registered
Sep 16 04:39:03.162802 kernel: acpiphp: Slot [29] registered
Sep 16 04:39:03.162820 kernel: acpiphp: Slot [30] registered
Sep 16 04:39:03.162837 kernel: acpiphp: Slot [31] registered
Sep 16 04:39:03.162855 kernel: PCI host bridge to bus 0000:00
Sep 16 04:39:03.163110 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 16 04:39:03.163291 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 16 04:39:03.163486 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 16 04:39:03.163659 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 16 04:39:03.165753 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:39:03.165974 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 16 04:39:03.166166 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 16 04:39:03.166368 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 16 04:39:03.168684 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 16 04:39:03.168895 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 16 04:39:03.169108 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 16 04:39:03.169297 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 16 04:39:03.169514 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 16 04:39:03.169706 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 16 04:39:03.169891 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 16 04:39:03.170077 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 16 04:39:03.170265 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 16 04:39:03.175552 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 16 04:39:03.175779 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 16 04:39:03.175977 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 16 04:39:03.176152 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 16 04:39:03.176346 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 16 04:39:03.176579 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 16 04:39:03.176608 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 16 04:39:03.176637 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 16 04:39:03.176656 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 16 04:39:03.176675 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 16 04:39:03.176693 kernel: iommu: Default domain type: Translated
Sep 16 04:39:03.176711 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 16 04:39:03.176730 kernel: efivars: Registered efivars operations
Sep 16 04:39:03.176749 kernel: vgaarb: loaded
Sep 16 04:39:03.176767 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 16 04:39:03.176785 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:39:03.176808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:39:03.176826 kernel: pnp: PnP ACPI init
Sep 16 04:39:03.177028 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 16 04:39:03.177056 kernel: pnp: PnP ACPI: found 1 devices
Sep 16 04:39:03.177075 kernel: NET: Registered PF_INET protocol family
Sep 16 04:39:03.177093 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:39:03.177112 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 16 04:39:03.177130 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:39:03.177154 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:39:03.177172 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 16 04:39:03.177190 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 16 04:39:03.177208 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:39:03.177226 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:39:03.177244 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:39:03.177262 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:39:03.177280 kernel: kvm [1]: HYP mode not available
Sep 16 04:39:03.177298 kernel: Initialise system trusted keyrings
Sep 16 04:39:03.177321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 16 04:39:03.177338 kernel: Key type asymmetric registered
Sep 16 04:39:03.177356 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:39:03.177374 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 16 04:39:03.177391 kernel: io scheduler mq-deadline registered
Sep 16 04:39:03.177437 kernel: io scheduler kyber registered
Sep 16 04:39:03.177458 kernel: io scheduler bfq registered
Sep 16 04:39:03.177669 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 16 04:39:03.177702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 16 04:39:03.177721 kernel: ACPI: button: Power Button [PWRB]
Sep 16 04:39:03.177740 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 16 04:39:03.177762 kernel: ACPI: button: Sleep Button [SLPB]
Sep 16 04:39:03.177780 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:39:03.177801 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 16 04:39:03.178007 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 16 04:39:03.178034 kernel: printk: legacy console [ttyS0] disabled
Sep 16 04:39:03.178054 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 16 04:39:03.178079 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:39:03.178099 kernel: printk: legacy bootconsole [uart0] disabled
Sep 16 04:39:03.178117 kernel: thunder_xcv, ver 1.0
Sep 16 04:39:03.178136 kernel: thunder_bgx, ver 1.0
Sep 16 04:39:03.178155 kernel: nicpf, ver 1.0
Sep 16 04:39:03.178173 kernel: nicvf, ver 1.0
Sep 16 04:39:03.178382 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 16 04:39:03.178636 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:39:02 UTC (1757997542)
Sep 16 04:39:03.178670 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 16 04:39:03.178689 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 16 04:39:03.178707 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:39:03.178724 kernel: watchdog: NMI not fully supported
Sep 16 04:39:03.178742 kernel: watchdog: Hard watchdog permanently disabled
Sep 16 04:39:03.178760 kernel: Segment Routing with IPv6
Sep 16 04:39:03.178778 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:39:03.178796 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:39:03.178813 kernel: Key type dns_resolver registered
Sep 16 04:39:03.178836 kernel: registered taskstats version 1
Sep 16 04:39:03.178854 kernel: Loading compiled-in X.509 certificates
Sep 16 04:39:03.178871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805'
Sep 16 04:39:03.178889 kernel: Demotion targets for Node 0: null
Sep 16 04:39:03.178907 kernel: Key type .fscrypt registered
Sep 16 04:39:03.178924 kernel: Key type fscrypt-provisioning registered
Sep 16 04:39:03.178942 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 04:39:03.178959 kernel: ima: Allocated hash algorithm: sha1
Sep 16 04:39:03.178977 kernel: ima: No architecture policies found
Sep 16 04:39:03.178999 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 16 04:39:03.179017 kernel: clk: Disabling unused clocks
Sep 16 04:39:03.179035 kernel: PM: genpd: Disabling unused power domains
Sep 16 04:39:03.179053 kernel: Warning: unable to open an initial console.
Sep 16 04:39:03.179071 kernel: Freeing unused kernel memory: 38976K
Sep 16 04:39:03.179089 kernel: Run /init as init process
Sep 16 04:39:03.179106 kernel:   with arguments:
Sep 16 04:39:03.179124 kernel:     /init
Sep 16 04:39:03.179141 kernel:   with environment:
Sep 16 04:39:03.179159 kernel:     HOME=/
Sep 16 04:39:03.179181 kernel:     TERM=linux
Sep 16 04:39:03.179198 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 04:39:03.179218 systemd[1]: Successfully made /usr/ read-only.
Sep 16 04:39:03.179242 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:39:03.179262 systemd[1]: Detected virtualization amazon.
Sep 16 04:39:03.179281 systemd[1]: Detected architecture arm64.
Sep 16 04:39:03.179299 systemd[1]: Running in initrd.
Sep 16 04:39:03.179322 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:39:03.179342 systemd[1]: Hostname set to .
Sep 16 04:39:03.179361 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:39:03.179380 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:39:03.179455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:39:03.179481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:39:03.179503 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:39:03.179523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:39:03.179550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:39:03.179571 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:39:03.179593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:39:03.179614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:39:03.179634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:39:03.179654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:39:03.179674 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:39:03.179698 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:39:03.179718 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:39:03.179737 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:39:03.179756 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:39:03.179776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:39:03.179796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:39:03.179816 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:39:03.179836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:39:03.179860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:39:03.179879 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:39:03.179898 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:39:03.179917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:39:03.179937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:39:03.179957 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:39:03.179977 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:39:03.179996 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:39:03.180016 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:39:03.180041 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:39:03.180060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:39:03.180080 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:39:03.180100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:39:03.180124 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:39:03.180144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:39:03.180164 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:39:03.180183 kernel: Bridge firewalling registered
Sep 16 04:39:03.180280 systemd-journald[258]: Collecting audit messages is disabled.
Sep 16 04:39:03.180345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:39:03.180371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:39:03.180392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:39:03.181169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:39:03.181199 systemd-journald[258]: Journal started
Sep 16 04:39:03.181246 systemd-journald[258]: Runtime Journal (/run/log/journal/ec204434f78f261e3fd6848038f5e57f) is 8M, max 75.3M, 67.3M free.
Sep 16 04:39:03.092932 systemd-modules-load[259]: Inserted module 'overlay'
Sep 16 04:39:03.136287 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 16 04:39:03.187437 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:39:03.189855 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:39:03.200053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:39:03.211712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:39:03.221125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:39:03.233653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:39:03.243305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:39:03.264376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:39:03.279830 systemd-tmpfiles[288]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:39:03.288090 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313
Sep 16 04:39:03.303128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:39:03.312615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:39:03.396840 systemd-resolved[314]: Positive Trust Anchors:
Sep 16 04:39:03.396875 systemd-resolved[314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:39:03.396937 systemd-resolved[314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:39:03.470438 kernel: SCSI subsystem initialized
Sep 16 04:39:03.478434 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:39:03.491439 kernel: iscsi: registered transport (tcp)
Sep 16 04:39:03.512870 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:39:03.512965 kernel: QLogic iSCSI HBA Driver
Sep 16 04:39:03.547478 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:39:03.573492 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:39:03.582606 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:39:03.670437 kernel: random: crng init done
Sep 16 04:39:03.670686 systemd-resolved[314]: Defaulting to hostname 'linux'.
Sep 16 04:39:03.674375 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:39:03.679240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:39:03.701966 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:39:03.708763 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:39:03.803448 kernel: raid6: neonx8 gen() 6410 MB/s
Sep 16 04:39:03.820436 kernel: raid6: neonx4 gen() 6394 MB/s
Sep 16 04:39:03.837435 kernel: raid6: neonx2 gen() 5333 MB/s
Sep 16 04:39:03.854436 kernel: raid6: neonx1 gen() 3911 MB/s
Sep 16 04:39:03.871435 kernel: raid6: int64x8 gen() 3615 MB/s
Sep 16 04:39:03.888436 kernel: raid6: int64x4 gen() 3673 MB/s
Sep 16 04:39:03.905436 kernel: raid6: int64x2 gen() 3540 MB/s
Sep 16 04:39:03.923453 kernel: raid6: int64x1 gen() 2755 MB/s
Sep 16 04:39:03.923491 kernel: raid6: using algorithm neonx8 gen() 6410 MB/s
Sep 16 04:39:03.942437 kernel: raid6: .... xor() 4746 MB/s, rmw enabled
Sep 16 04:39:03.942475 kernel: raid6: using neon recovery algorithm
Sep 16 04:39:03.951016 kernel: xor: measuring software checksum speed
Sep 16 04:39:03.951067 kernel: 8regs : 12943 MB/sec
Sep 16 04:39:03.952215 kernel: 32regs : 13041 MB/sec
Sep 16 04:39:03.954592 kernel: arm64_neon : 8072 MB/sec
Sep 16 04:39:03.954634 kernel: xor: using function: 32regs (13041 MB/sec)
Sep 16 04:39:04.045451 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:39:04.056610 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:39:04.063099 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:39:04.110574 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Sep 16 04:39:04.122532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:39:04.135303 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:39:04.179426 dracut-pre-trigger[515]: rd.md=0: removing MD RAID activation
Sep 16 04:39:04.221513 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:39:04.229983 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:39:04.358246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:39:04.367609 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:39:04.514109 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 16 04:39:04.514189 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 16 04:39:04.523249 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 16 04:39:04.523683 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 16 04:39:04.537597 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:74:59:cf:a3:1b
Sep 16 04:39:04.537899 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 16 04:39:04.537937 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 16 04:39:04.551445 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 16 04:39:04.560306 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 04:39:04.560367 kernel: GPT:9289727 != 16777215
Sep 16 04:39:04.560431 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 04:39:04.560461 kernel: GPT:9289727 != 16777215
Sep 16 04:39:04.560485 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 04:39:04.560520 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:39:04.560145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:39:04.560275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:39:04.563845 (udev-worker)[576]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 04:39:04.571880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:39:04.580920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:39:04.587994 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:39:04.616594 kernel: nvme nvme0: using unchecked data buffer
Sep 16 04:39:04.638628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:39:04.732172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 16 04:39:04.770507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 16 04:39:04.773390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 16 04:39:04.845076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 16 04:39:04.851086 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:39:04.876724 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 16 04:39:04.880584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:39:04.888232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:39:04.893732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:39:04.900640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:39:04.908062 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:39:04.942746 disk-uuid[687]: Primary Header is updated.
Sep 16 04:39:04.942746 disk-uuid[687]: Secondary Entries is updated.
Sep 16 04:39:04.942746 disk-uuid[687]: Secondary Header is updated.
Sep 16 04:39:04.954219 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:39:04.954355 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:39:05.980431 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:39:05.982598 disk-uuid[690]: The operation has completed successfully.
Sep 16 04:39:06.148837 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:39:06.150808 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:39:06.252941 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:39:06.290636 sh[955]: Success
Sep 16 04:39:06.318762 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:39:06.318836 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:39:06.320874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:39:06.332503 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 16 04:39:06.437789 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:39:06.443673 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:39:06.483900 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:39:06.505455 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (978)
Sep 16 04:39:06.509860 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287
Sep 16 04:39:06.509989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 16 04:39:06.571107 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 04:39:06.571165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:39:06.572431 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:39:06.610696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:39:06.611515 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:39:06.617178 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:39:06.618371 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:39:06.632898 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:39:06.682472 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009)
Sep 16 04:39:06.687813 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b
Sep 16 04:39:06.687884 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 16 04:39:06.705481 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:39:06.705593 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:39:06.712491 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b
Sep 16 04:39:06.714378 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:39:06.721841 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:39:06.811473 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:39:06.821301 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:39:06.913687 systemd-networkd[1148]: lo: Link UP
Sep 16 04:39:06.915662 systemd-networkd[1148]: lo: Gained carrier
Sep 16 04:39:06.919712 systemd-networkd[1148]: Enumeration completed
Sep 16 04:39:06.919883 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:39:06.923149 systemd[1]: Reached target network.target - Network.
Sep 16 04:39:06.924003 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:39:06.924011 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:39:06.929715 systemd-networkd[1148]: eth0: Link UP
Sep 16 04:39:06.929723 systemd-networkd[1148]: eth0: Gained carrier
Sep 16 04:39:06.929744 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:39:06.959911 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.31.59/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 16 04:39:07.031653 ignition[1082]: Ignition 2.22.0
Sep 16 04:39:07.031673 ignition[1082]: Stage: fetch-offline
Sep 16 04:39:07.032978 ignition[1082]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:07.033006 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:07.033538 ignition[1082]: Ignition finished successfully
Sep 16 04:39:07.041330 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:39:07.048263 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 04:39:07.095501 ignition[1162]: Ignition 2.22.0
Sep 16 04:39:07.095530 ignition[1162]: Stage: fetch
Sep 16 04:39:07.096015 ignition[1162]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:07.096039 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:07.096168 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:07.112614 ignition[1162]: PUT result: OK
Sep 16 04:39:07.115568 ignition[1162]: parsed url from cmdline: ""
Sep 16 04:39:07.115583 ignition[1162]: no config URL provided
Sep 16 04:39:07.115819 ignition[1162]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:39:07.115846 ignition[1162]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:39:07.115919 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:07.124661 ignition[1162]: PUT result: OK
Sep 16 04:39:07.124930 ignition[1162]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 16 04:39:07.127009 ignition[1162]: GET result: OK
Sep 16 04:39:07.127171 ignition[1162]: parsing config with SHA512: ae2d0a3f3e9d3d15854bd86e4343893238cc6517427290fef783c27be5bb32fe0f777b460054d172b138eeb598e2e59b6845042815372cdaf1ce7c9fe9f2713a
Sep 16 04:39:07.140852 unknown[1162]: fetched base config from "system"
Sep 16 04:39:07.140880 unknown[1162]: fetched base config from "system"
Sep 16 04:39:07.140894 unknown[1162]: fetched user config from "aws"
Sep 16 04:39:07.144979 ignition[1162]: fetch: fetch complete
Sep 16 04:39:07.144993 ignition[1162]: fetch: fetch passed
Sep 16 04:39:07.145086 ignition[1162]: Ignition finished successfully
Sep 16 04:39:07.155167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:39:07.159675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:39:07.213295 ignition[1169]: Ignition 2.22.0
Sep 16 04:39:07.213854 ignition[1169]: Stage: kargs
Sep 16 04:39:07.214438 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:07.214462 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:07.214613 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:07.224471 ignition[1169]: PUT result: OK
Sep 16 04:39:07.228850 ignition[1169]: kargs: kargs passed
Sep 16 04:39:07.228964 ignition[1169]: Ignition finished successfully
Sep 16 04:39:07.233486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:39:07.239960 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:39:07.278944 ignition[1175]: Ignition 2.22.0
Sep 16 04:39:07.278976 ignition[1175]: Stage: disks
Sep 16 04:39:07.279575 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:07.279599 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:07.279740 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:07.282319 ignition[1175]: PUT result: OK
Sep 16 04:39:07.292930 ignition[1175]: disks: disks passed
Sep 16 04:39:07.293034 ignition[1175]: Ignition finished successfully
Sep 16 04:39:07.297225 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:39:07.302611 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:39:07.305369 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:39:07.310539 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:39:07.315266 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:39:07.317685 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:39:07.329619 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:39:07.402811 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 16 04:39:07.408766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:39:07.416476 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:39:07.552435 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none.
Sep 16 04:39:07.554007 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:39:07.557496 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:39:07.562543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:39:07.580947 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:39:07.585524 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 16 04:39:07.591451 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:39:07.591517 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:39:07.612559 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202)
Sep 16 04:39:07.616772 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b
Sep 16 04:39:07.616853 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 16 04:39:07.618803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:39:07.625004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:39:07.634727 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:39:07.634802 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:39:07.638355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:39:07.729599 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:39:07.739603 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:39:07.750067 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:39:07.760780 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:39:07.906066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:39:07.913241 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:39:07.919658 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:39:07.945589 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:39:07.948683 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b
Sep 16 04:39:07.979843 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:39:08.001031 ignition[1314]: INFO : Ignition 2.22.0
Sep 16 04:39:08.001031 ignition[1314]: INFO : Stage: mount
Sep 16 04:39:08.004685 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:08.004685 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:08.009696 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:08.013092 ignition[1314]: INFO : PUT result: OK
Sep 16 04:39:08.017168 ignition[1314]: INFO : mount: mount passed
Sep 16 04:39:08.018901 ignition[1314]: INFO : Ignition finished successfully
Sep 16 04:39:08.026933 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:39:08.037550 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:39:08.486592 systemd-networkd[1148]: eth0: Gained IPv6LL
Sep 16 04:39:08.556853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:39:08.604457 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1328)
Sep 16 04:39:08.609990 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b
Sep 16 04:39:08.610112 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 16 04:39:08.617968 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:39:08.618039 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:39:08.621440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:39:08.670919 ignition[1345]: INFO : Ignition 2.22.0
Sep 16 04:39:08.670919 ignition[1345]: INFO : Stage: files
Sep 16 04:39:08.675382 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:39:08.675382 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:39:08.675382 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:39:08.688380 ignition[1345]: INFO : PUT result: OK
Sep 16 04:39:08.691596 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:39:08.695039 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:39:08.695039 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:39:08.704662 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:39:08.707767 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:39:08.712971 unknown[1345]: wrote ssh authorized keys file for user: core
Sep 16 04:39:08.715437 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:39:08.719347 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 16 04:39:08.726620 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 16 04:39:08.844590 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:39:09.076805 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 16 04:39:09.081173 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:39:09.085093 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:39:09.085093 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:39:09.092846 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:39:09.092846 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:39:09.100583 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:39:09.100583 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:39:09.100583 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:39:09.117289 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:39:09.121365 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:39:09.121365 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 16 04:39:09.130956 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 16 04:39:09.130956 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 16 04:39:09.130956 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 16 04:39:09.653484 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:39:10.030149 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 16 04:39:10.030149 ignition[1345]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:39:10.037750 ignition[1345]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:39:10.041909 ignition[1345]: INFO : files: files passed
Sep 16 04:39:10.041909 ignition[1345]: INFO : Ignition finished successfully
Sep 16 04:39:10.069216 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:39:10.074609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:39:10.083372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:39:10.102876 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:39:10.106513 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:39:10.117932 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:39:10.117932 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:39:10.125145 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:39:10.132469 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:39:10.139612 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:39:10.144572 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:39:10.254581 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:39:10.256928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:39:10.263040 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:39:10.265502 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:39:10.268069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:39:10.269447 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:39:10.310887 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:39:10.317607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:39:10.354372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:39:10.359512 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:39:10.364732 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:39:10.367022 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:39:10.367321 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:39:10.374187 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:39:10.377070 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:39:10.384312 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:39:10.387179 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:39:10.399732 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:39:10.402483 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:39:10.405491 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:39:10.413330 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:39:10.420756 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:39:10.425827 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:39:10.428262 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:39:10.433957 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:39:10.434192 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:39:10.441866 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:39:10.446610 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:39:10.450665 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:39:10.454589 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:39:10.457441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:39:10.457670 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:39:10.467498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:39:10.467921 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:39:10.476462 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:39:10.476673 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:39:10.482200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:39:10.492513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:39:10.494696 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:39:10.497366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:39:10.506442 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:39:10.506948 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:39:10.525223 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:39:10.526454 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:39:10.556464 ignition[1398]: INFO : Ignition 2.22.0
Sep 16 04:39:10.556464 ignition[1398]: INFO : Stage: umount
Sep 16 04:39:10.557825 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:39:10.562798 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:39:10.562798 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 16 04:39:10.562798 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 16 04:39:10.570865 ignition[1398]: INFO : PUT result: OK Sep 16 04:39:10.577624 ignition[1398]: INFO : umount: umount passed Sep 16 04:39:10.579518 ignition[1398]: INFO : Ignition finished successfully Sep 16 04:39:10.586630 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:39:10.588826 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:39:10.594265 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:39:10.594490 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:39:10.600997 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:39:10.601100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:39:10.607313 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 04:39:10.607417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 04:39:10.610462 systemd[1]: Stopped target network.target - Network. Sep 16 04:39:10.618593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:39:10.618711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:39:10.621564 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:39:10.625661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:39:10.628040 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:39:10.630931 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:39:10.633065 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:39:10.635057 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 16 04:39:10.635138 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:39:10.647703 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:39:10.647783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:39:10.653874 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:39:10.653985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:39:10.656537 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:39:10.656617 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:39:10.664474 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:39:10.677737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:39:10.688526 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:39:10.689208 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:39:10.700763 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:39:10.701226 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:39:10.701478 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:39:10.723839 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:39:10.729121 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:39:10.737785 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:39:10.737881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:39:10.745129 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:39:10.751541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:39:10.751677 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 16 04:39:10.754747 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:39:10.754852 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:39:10.760580 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:39:10.760673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:39:10.773345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:39:10.773462 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:39:10.779154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:39:10.795122 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:39:10.798723 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:39:10.803197 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:39:10.803649 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:39:10.812168 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:39:10.812375 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:39:10.831509 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:39:10.832681 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:39:10.842712 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:39:10.842800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:39:10.845472 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:39:10.845533 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:39:10.853202 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 16 04:39:10.853307 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:39:10.861785 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:39:10.861882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:39:10.868445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:39:10.868535 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:39:10.877026 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:39:10.880971 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:39:10.884939 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:39:10.895857 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:39:10.895959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:39:10.904019 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 04:39:10.904118 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:39:10.910741 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:39:10.910841 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:39:10.919572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:39:10.919674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:39:10.928866 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 16 04:39:10.928973 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Sep 16 04:39:10.929053 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 16 04:39:10.929137 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:39:10.931843 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:39:10.933073 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:39:10.941341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:39:10.943455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:39:10.951146 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:39:10.958013 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:39:10.995739 systemd[1]: Switching root. Sep 16 04:39:11.033226 systemd-journald[258]: Journal stopped Sep 16 04:39:13.082944 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). Sep 16 04:39:13.083059 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:39:13.083106 kernel: SELinux: policy capability open_perms=1 Sep 16 04:39:13.083136 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:39:13.083165 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:39:13.083192 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:39:13.083222 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:39:13.083254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:39:13.083287 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:39:13.083324 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:39:13.083353 kernel: audit: type=1403 audit(1757997551.312:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:39:13.083389 systemd[1]: Successfully loaded SELinux policy in 82.585ms. 
Sep 16 04:39:13.083453 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.787ms. Sep 16 04:39:13.083489 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:39:13.083518 systemd[1]: Detected virtualization amazon. Sep 16 04:39:13.083548 systemd[1]: Detected architecture arm64. Sep 16 04:39:13.083576 systemd[1]: Detected first boot. Sep 16 04:39:13.083610 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:39:13.083640 zram_generator::config[1442]: No configuration found. Sep 16 04:39:13.083671 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:39:13.083701 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:39:13.083729 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:39:13.083760 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:39:13.083791 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:39:13.083827 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:39:13.083859 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:39:13.083890 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:39:13.083920 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:39:13.083953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:39:13.083983 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:39:13.084011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Sep 16 04:39:13.084042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:39:13.084069 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:39:13.084096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:39:13.084128 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:39:13.084158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:39:13.084187 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:39:13.084236 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:39:13.084270 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:39:13.084300 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 16 04:39:13.084331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:39:13.084365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:39:13.084393 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:39:13.085485 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:39:13.085525 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:39:13.085554 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:39:13.085584 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:39:13.085613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:39:13.085643 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:39:13.085679 systemd[1]: Reached target swap.target - Swaps. 
Sep 16 04:39:13.085707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:39:13.085742 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:39:13.085770 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:39:13.085797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:39:13.085827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:39:13.085866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:39:13.085894 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:39:13.085924 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:39:13.085953 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:39:13.085984 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:39:13.086017 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:39:13.086049 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:39:13.086080 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:39:13.086109 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:39:13.086138 systemd[1]: Reached target machines.target - Containers. Sep 16 04:39:13.086166 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:39:13.086194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:39:13.086223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 16 04:39:13.086255 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:39:13.086285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:39:13.086315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:39:13.086343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:39:13.086371 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:39:13.088431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:39:13.088498 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:39:13.088529 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:39:13.088567 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:39:13.088599 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:39:13.088628 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:39:13.088663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:39:13.088694 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:39:13.088722 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:39:13.088751 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:39:13.088785 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:39:13.088815 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 16 04:39:13.088842 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:39:13.088876 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:39:13.088907 systemd[1]: Stopped verity-setup.service. Sep 16 04:39:13.088939 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:39:13.088966 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:39:13.088996 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:39:13.089039 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:39:13.089069 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:39:13.089097 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:39:13.089127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:39:13.089154 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:39:13.089187 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:39:13.089215 kernel: loop: module loaded Sep 16 04:39:13.089242 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:39:13.089272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:39:13.089301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:39:13.089329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:39:13.089359 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:39:13.089386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:39:13.089445 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:39:13.089477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 16 04:39:13.089505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:39:13.089533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:39:13.089564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:39:13.089595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:39:13.089623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:39:13.089651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:39:13.089683 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:39:13.089716 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:39:13.089744 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:39:13.089772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:39:13.089799 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:39:13.089828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:39:13.089856 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:39:13.089889 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:39:13.089917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:39:13.089945 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:39:13.089976 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Sep 16 04:39:13.090052 systemd-journald[1518]: Collecting audit messages is disabled. Sep 16 04:39:13.090099 systemd-journald[1518]: Journal started Sep 16 04:39:13.090148 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec204434f78f261e3fd6848038f5e57f) is 8M, max 75.3M, 67.3M free. Sep 16 04:39:12.329389 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:39:12.352142 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 16 04:39:12.352986 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:39:13.099824 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:39:13.106816 systemd-tmpfiles[1529]: ACLs are not supported, ignoring. Sep 16 04:39:13.106842 systemd-tmpfiles[1529]: ACLs are not supported, ignoring. Sep 16 04:39:13.144792 kernel: loop0: detected capacity change from 0 to 119368 Sep 16 04:39:13.144856 kernel: ACPI: bus type drm_connector registered Sep 16 04:39:13.129970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:39:13.133655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:39:13.157677 kernel: fuse: init (API version 7.41) Sep 16 04:39:13.163341 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:39:13.170506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:39:13.173672 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:39:13.174511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:39:13.177733 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:39:13.185306 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:39:13.192807 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 16 04:39:13.204518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:39:13.227639 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec204434f78f261e3fd6848038f5e57f is 88.416ms for 939 entries. Sep 16 04:39:13.227639 systemd-journald[1518]: System Journal (/var/log/journal/ec204434f78f261e3fd6848038f5e57f) is 8M, max 195.6M, 187.6M free. Sep 16 04:39:13.331507 systemd-journald[1518]: Received client request to flush runtime journal. Sep 16 04:39:13.331769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:39:13.331825 kernel: loop1: detected capacity change from 0 to 207008 Sep 16 04:39:13.303527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:39:13.313239 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:39:13.338879 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:39:13.362396 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:39:13.371859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:39:13.380155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:39:13.384372 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:39:13.407854 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:39:13.427508 kernel: loop2: detected capacity change from 0 to 100632 Sep 16 04:39:13.461181 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:39:13.471679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:39:13.506880 kernel: loop3: detected capacity change from 0 to 61264 Sep 16 04:39:13.536152 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Sep 16 04:39:13.536195 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. 
Sep 16 04:39:13.547132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:39:13.658444 kernel: loop4: detected capacity change from 0 to 119368 Sep 16 04:39:13.686445 kernel: loop5: detected capacity change from 0 to 207008 Sep 16 04:39:13.729455 kernel: loop6: detected capacity change from 0 to 100632 Sep 16 04:39:13.762442 kernel: loop7: detected capacity change from 0 to 61264 Sep 16 04:39:13.790288 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 16 04:39:13.791477 (sd-merge)[1603]: Merged extensions into '/usr'. Sep 16 04:39:13.803992 systemd[1]: Reload requested from client PID 1544 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:39:13.804034 systemd[1]: Reloading... Sep 16 04:39:13.987448 zram_generator::config[1632]: No configuration found. Sep 16 04:39:14.020484 ldconfig[1541]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:39:14.438912 systemd[1]: Reloading finished in 633 ms. Sep 16 04:39:14.460992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:39:14.464165 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:39:14.467457 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:39:14.481550 systemd[1]: Starting ensure-sysext.service... Sep 16 04:39:14.490646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:39:14.502922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:39:14.545561 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:39:14.546540 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Sep 16 04:39:14.548016 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:39:14.548579 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:39:14.548716 systemd[1]: Reloading... Sep 16 04:39:14.551020 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:39:14.553866 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:39:14.557746 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Sep 16 04:39:14.557996 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Sep 16 04:39:14.580700 systemd-udevd[1684]: Using default interface naming scheme 'v255'. Sep 16 04:39:14.585767 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:39:14.585794 systemd-tmpfiles[1683]: Skipping /boot Sep 16 04:39:14.621659 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:39:14.621688 systemd-tmpfiles[1683]: Skipping /boot Sep 16 04:39:14.810447 zram_generator::config[1743]: No configuration found. Sep 16 04:39:14.907176 (udev-worker)[1720]: Network interface NamePolicy= disabled on kernel command line. Sep 16 04:39:15.395939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 04:39:15.398638 systemd[1]: Reloading finished in 849 ms. Sep 16 04:39:15.416126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:39:15.421245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:39:15.514685 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:39:15.521702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 16 04:39:15.529224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:39:15.537202 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:39:15.543647 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:39:15.548684 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:39:15.561274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:39:15.565921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:39:15.627812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:39:15.634962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:39:15.637524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:39:15.637761 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:39:15.651970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 16 04:39:15.693911 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:39:15.702318 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:39:15.719833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:39:15.720292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 16 04:39:15.720507 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:39:15.728045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:39:15.732648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:39:15.733622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:39:15.751026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:39:15.754236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:39:15.766535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:39:15.769123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:39:15.769357 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:39:15.769721 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:39:15.774046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:39:15.775555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:39:15.785764 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:39:15.786248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:39:15.789560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 16 04:39:15.800942 systemd[1]: Finished ensure-sysext.service. Sep 16 04:39:15.826718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:39:15.827110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:39:15.828344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:39:15.838377 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:39:15.838849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:39:15.856167 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:39:15.865848 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:39:15.894173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:39:15.907392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:39:15.960747 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:39:15.964954 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:39:15.973588 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:39:16.002696 augenrules[1946]: No rules Sep 16 04:39:16.006896 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:39:16.007330 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:39:16.094298 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:39:16.107235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 16 04:39:16.215687 systemd-networkd[1898]: lo: Link UP Sep 16 04:39:16.216245 systemd-networkd[1898]: lo: Gained carrier Sep 16 04:39:16.219172 systemd-networkd[1898]: Enumeration completed Sep 16 04:39:16.219545 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:39:16.225234 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:39:16.225387 systemd-networkd[1898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:39:16.225803 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:39:16.232385 systemd-resolved[1899]: Positive Trust Anchors: Sep 16 04:39:16.232749 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:39:16.235387 systemd-resolved[1899]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:39:16.235473 systemd-resolved[1899]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:39:16.237909 systemd-networkd[1898]: eth0: Link UP Sep 16 04:39:16.238445 systemd-networkd[1898]: eth0: Gained carrier Sep 16 04:39:16.238488 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:39:16.250539 systemd-networkd[1898]: eth0: DHCPv4 address 172.31.31.59/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 16 04:39:16.256247 systemd-resolved[1899]: Defaulting to hostname 'linux'. Sep 16 04:39:16.261324 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:39:16.268486 systemd[1]: Reached target network.target - Network. Sep 16 04:39:16.270791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:39:16.274165 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:39:16.280840 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:39:16.283800 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:39:16.287234 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:39:16.289930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:39:16.292923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:39:16.295707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:39:16.295760 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:39:16.297821 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:39:16.301626 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:39:16.306381 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:39:16.313746 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:39:16.316886 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Sep 16 04:39:16.319798 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:39:16.331486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:39:16.334242 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:39:16.338394 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:39:16.342615 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:39:16.346078 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:39:16.348610 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:39:16.350784 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:39:16.350848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:39:16.353010 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:39:16.361674 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:39:16.370889 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:39:16.379683 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:39:16.388637 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:39:16.399754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:39:16.402109 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:39:16.413841 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:39:16.422821 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:39:16.429121 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 16 04:39:16.445774 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 16 04:39:16.452599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:39:16.461824 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:39:16.471156 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:39:16.478150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:39:16.480119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:39:16.484300 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:39:16.497032 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:39:16.505525 jq[1972]: false Sep 16 04:39:16.513433 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:39:16.516923 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:39:16.517364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:39:16.562780 extend-filesystems[1973]: Found /dev/nvme0n1p6 Sep 16 04:39:16.588557 tar[1989]: linux-arm64/LICENSE Sep 16 04:39:16.588557 tar[1989]: linux-arm64/helm Sep 16 04:39:16.598952 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:39:16.602311 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 16 04:39:16.611782 ntpd[1976]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:29 UTC 2025 (1): Starting Sep 16 04:39:16.612792 jq[1984]: true Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:29 UTC 2025 (1): Starting Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: ---------------------------------------------------- Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: corporation. Support and training for ntp-4 are Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: available at https://www.nwtime.org/support Sep 16 04:39:16.616279 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: ---------------------------------------------------- Sep 16 04:39:16.614592 ntpd[1976]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:39:16.614612 ntpd[1976]: ---------------------------------------------------- Sep 16 04:39:16.614629 ntpd[1976]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:39:16.614646 ntpd[1976]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:39:16.614662 ntpd[1976]: corporation. 
Support and training for ntp-4 are Sep 16 04:39:16.614679 ntpd[1976]: available at https://www.nwtime.org/support Sep 16 04:39:16.614695 ntpd[1976]: ---------------------------------------------------- Sep 16 04:39:16.625546 ntpd[1976]: proto: precision = 0.096 usec (-23) Sep 16 04:39:16.626190 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: proto: precision = 0.096 usec (-23) Sep 16 04:39:16.627053 ntpd[1976]: basedate set to 2025-09-04 Sep 16 04:39:16.628087 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: basedate set to 2025-09-04 Sep 16 04:39:16.628087 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: gps base set to 2025-09-07 (week 2383) Sep 16 04:39:16.628087 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:39:16.628087 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:39:16.627089 ntpd[1976]: gps base set to 2025-09-07 (week 2383) Sep 16 04:39:16.627263 ntpd[1976]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:39:16.627310 ntpd[1976]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:39:16.629693 ntpd[1976]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:39:16.631340 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:39:16.631340 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Listen normally on 3 eth0 172.31.31.59:123 Sep 16 04:39:16.631340 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: Listen normally on 4 lo [::1]:123 Sep 16 04:39:16.631340 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: bind(21) AF_INET6 [fe80::474:59ff:fecf:a31b%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:39:16.631340 ntpd[1976]: 16 Sep 04:39:16 ntpd[1976]: unable to create socket on eth0 (5) for [fe80::474:59ff:fecf:a31b%2]:123 Sep 16 04:39:16.629752 ntpd[1976]: Listen normally on 3 eth0 172.31.31.59:123 Sep 16 04:39:16.629801 ntpd[1976]: Listen normally on 4 lo [::1]:123 Sep 16 04:39:16.629847 ntpd[1976]: bind(21) AF_INET6 [fe80::474:59ff:fecf:a31b%2]:123 flags 0x811 failed: 
Cannot assign requested address Sep 16 04:39:16.629884 ntpd[1976]: unable to create socket on eth0 (5) for [fe80::474:59ff:fecf:a31b%2]:123 Sep 16 04:39:16.640712 systemd-coredump[2013]: Process 1976 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 04:39:16.642890 extend-filesystems[1973]: Found /dev/nvme0n1p9 Sep 16 04:39:16.651853 extend-filesystems[1973]: Checking size of /dev/nvme0n1p9 Sep 16 04:39:16.667766 (ntainerd)[2009]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:39:16.668850 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 16 04:39:16.682338 systemd[1]: Started systemd-coredump@0-2013-0.service - Process Core Dump (PID 2013/UID 0). Sep 16 04:39:16.686055 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:39:16.689368 dbus-daemon[1970]: [system] SELinux support is enabled Sep 16 04:39:16.687066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:39:16.706576 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:39:16.716298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:39:16.716370 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:39:16.720692 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:39:16.720732 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 16 04:39:16.756435 extend-filesystems[1973]: Resized partition /dev/nvme0n1p9 Sep 16 04:39:16.766896 extend-filesystems[2026]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:39:16.775718 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 16 04:39:16.786720 dbus-daemon[1970]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1898 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 16 04:39:16.786908 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 16 04:39:16.793644 jq[2011]: true Sep 16 04:39:16.805395 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 16 04:39:16.829522 coreos-metadata[1969]: Sep 16 04:39:16.828 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 16 04:39:16.845146 update_engine[1983]: I20250916 04:39:16.840210 1983 main.cc:92] Flatcar Update Engine starting Sep 16 04:39:16.845878 coreos-metadata[1969]: Sep 16 04:39:16.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 16 04:39:16.848277 coreos-metadata[1969]: Sep 16 04:39:16.848 INFO Fetch successful Sep 16 04:39:16.848277 coreos-metadata[1969]: Sep 16 04:39:16.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 16 04:39:16.851226 coreos-metadata[1969]: Sep 16 04:39:16.851 INFO Fetch successful Sep 16 04:39:16.851226 coreos-metadata[1969]: Sep 16 04:39:16.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 16 04:39:16.860039 coreos-metadata[1969]: Sep 16 04:39:16.859 INFO Fetch successful Sep 16 04:39:16.860039 coreos-metadata[1969]: Sep 16 04:39:16.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 16 04:39:16.862263 systemd[1]: Started update-engine.service - Update Engine. 
Sep 16 04:39:16.864942 update_engine[1983]: I20250916 04:39:16.864876 1983 update_check_scheduler.cc:74] Next update check in 6m32s Sep 16 04:39:16.866015 coreos-metadata[1969]: Sep 16 04:39:16.865 INFO Fetch successful Sep 16 04:39:16.866015 coreos-metadata[1969]: Sep 16 04:39:16.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 16 04:39:16.867539 coreos-metadata[1969]: Sep 16 04:39:16.867 INFO Fetch failed with 404: resource not found Sep 16 04:39:16.867985 coreos-metadata[1969]: Sep 16 04:39:16.867 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 16 04:39:16.869071 coreos-metadata[1969]: Sep 16 04:39:16.868 INFO Fetch successful Sep 16 04:39:16.869505 coreos-metadata[1969]: Sep 16 04:39:16.869 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 16 04:39:16.872260 coreos-metadata[1969]: Sep 16 04:39:16.872 INFO Fetch successful Sep 16 04:39:16.872260 coreos-metadata[1969]: Sep 16 04:39:16.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 16 04:39:16.873752 coreos-metadata[1969]: Sep 16 04:39:16.873 INFO Fetch successful Sep 16 04:39:16.873752 coreos-metadata[1969]: Sep 16 04:39:16.873 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 16 04:39:16.876541 coreos-metadata[1969]: Sep 16 04:39:16.876 INFO Fetch successful Sep 16 04:39:16.876541 coreos-metadata[1969]: Sep 16 04:39:16.876 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 16 04:39:16.877318 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 16 04:39:16.883538 coreos-metadata[1969]: Sep 16 04:39:16.883 INFO Fetch successful Sep 16 04:39:16.927435 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 16 04:39:16.954708 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 16 04:39:16.954708 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 04:39:16.954708 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 16 04:39:16.967296 extend-filesystems[1973]: Resized filesystem in /dev/nvme0n1p9 Sep 16 04:39:16.964011 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:39:16.969167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:39:17.073477 bash[2058]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:39:17.075899 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:39:17.087834 systemd[1]: Starting sshkeys.service... Sep 16 04:39:17.102704 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:39:17.105850 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:39:17.187233 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:39:17.197902 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 04:39:17.249353 systemd-logind[1982]: Watching system buttons on /dev/input/event0 (Power Button) Sep 16 04:39:17.257546 systemd-logind[1982]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 16 04:39:17.258106 systemd-logind[1982]: New seat seat0. Sep 16 04:39:17.261318 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 16 04:39:17.478298 containerd[2009]: time="2025-09-16T04:39:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:39:17.482613 containerd[2009]: time="2025-09-16T04:39:17.482280730Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528000706Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.52µs" Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528061198Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528096550Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528444766Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528479434Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528528214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528637474Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.528662938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 
04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.529016362Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.529042978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.529081846Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529443 containerd[2009]: time="2025-09-16T04:39:17.529105510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:39:17.529979 containerd[2009]: time="2025-09-16T04:39:17.529252942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:39:17.537810 containerd[2009]: time="2025-09-16T04:39:17.536797282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:39:17.537810 containerd[2009]: time="2025-09-16T04:39:17.536884534Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:39:17.537810 containerd[2009]: time="2025-09-16T04:39:17.536912290Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:39:17.537810 containerd[2009]: time="2025-09-16T04:39:17.536987590Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:39:17.538181 
containerd[2009]: time="2025-09-16T04:39:17.538144798Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:39:17.540437 containerd[2009]: time="2025-09-16T04:39:17.539975110Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558567814Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558786922Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558826558Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558856042Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558887026Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558913894Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558943378Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.558971962Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.559000078Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: 
time="2025-09-16T04:39:17.559025194Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.559052218Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.559082818Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.559340662Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:39:17.562428 containerd[2009]: time="2025-09-16T04:39:17.559377598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559429714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559459582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559488598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559516462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559544770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559573366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559602166Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559629406Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.559655938Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.560022670Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.560058658Z" level=info msg="Start snapshots syncer" Sep 16 04:39:17.563062 containerd[2009]: time="2025-09-16T04:39:17.560098594Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:39:17.567329 containerd[2009]: time="2025-09-16T04:39:17.567188098Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:39:17.568641 containerd[2009]: time="2025-09-16T04:39:17.567597646Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:39:17.573615 containerd[2009]: time="2025-09-16T04:39:17.570016006Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574076470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574180918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574236514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574272886Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574327738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574361194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574430134Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574515502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574547218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 16 04:39:17.574746 containerd[2009]: time="2025-09-16T04:39:17.574612186Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 16 04:39:17.575805 containerd[2009]: time="2025-09-16T04:39:17.574719754Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 04:39:17.575805 containerd[2009]: time="2025-09-16T04:39:17.575306206Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 04:39:17.575805 containerd[2009]: time="2025-09-16T04:39:17.575724478Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 04:39:17.575805 containerd[2009]: time="2025-09-16T04:39:17.575763874Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 04:39:17.576034 containerd[2009]: time="2025-09-16T04:39:17.576006142Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 16 04:39:17.576203 containerd[2009]: time="2025-09-16T04:39:17.576157282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 16 04:39:17.576483 containerd[2009]: time="2025-09-16T04:39:17.576451522Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 16 04:39:17.578677 containerd[2009]: time="2025-09-16T04:39:17.577656502Z" level=info msg="runtime interface created"
Sep 16 04:39:17.578677 containerd[2009]: time="2025-09-16T04:39:17.577690066Z" level=info msg="created NRI interface"
Sep 16 04:39:17.584579 containerd[2009]: time="2025-09-16T04:39:17.577725178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 16 04:39:17.584579 containerd[2009]: time="2025-09-16T04:39:17.581888518Z" level=info msg="Connect containerd service"
Sep 16 04:39:17.584579 containerd[2009]: time="2025-09-16T04:39:17.582010738Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 16 04:39:17.589492 containerd[2009]: time="2025-09-16T04:39:17.589038958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 04:39:17.730742 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 16 04:39:17.764038 dbus-daemon[1970]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 16 04:39:17.772812 dbus-daemon[1970]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 16 04:39:17.798247 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 16 04:39:17.806493 coreos-metadata[2083]: Sep 16 04:39:17.805 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 16 04:39:17.806493 coreos-metadata[2083]: Sep 16 04:39:17.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 16 04:39:17.807815 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 16 04:39:17.815063 coreos-metadata[2083]: Sep 16 04:39:17.814 INFO Fetch successful
Sep 16 04:39:17.819326 coreos-metadata[2083]: Sep 16 04:39:17.818 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 16 04:39:17.820011 coreos-metadata[2083]: Sep 16 04:39:17.819 INFO Fetch successful
Sep 16 04:39:17.826091 unknown[2083]: wrote ssh authorized keys file for user: core
Sep 16 04:39:17.894565 systemd-networkd[1898]: eth0: Gained IPv6LL
Sep 16 04:39:17.907694 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:39:17.925998 update-ssh-keys[2174]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:39:17.911653 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:39:17.921613 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 16 04:39:17.930339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:39:17.938948 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 04:39:17.944997 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 16 04:39:17.957511 systemd[1]: Finished sshkeys.service.
Sep 16 04:39:18.016336 systemd-coredump[2018]: Process 1976 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1976: #0 0x0000aaaab19d0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab197fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab1980240 n/a (ntpd + 0x10240) #3 0x0000aaaab197be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab197d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab1985a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab197738c n/a (ntpd + 0x738c) #7 0x0000ffffb5522034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffb5522118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab19773f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64
Sep 16 04:39:18.028484 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Sep 16 04:39:18.028786 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Sep 16 04:39:18.050296 systemd[1]: systemd-coredump@0-2013-0.service: Deactivated successfully.
Sep 16 04:39:18.120490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 04:39:18.187331 locksmithd[2038]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 04:39:18.192300 containerd[2009]: time="2025-09-16T04:39:18.191478585Z" level=info msg="Start subscribing containerd event"
Sep 16 04:39:18.193730 containerd[2009]: time="2025-09-16T04:39:18.193681773Z" level=info msg="Start recovering state"
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.192161133Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194062413Z" level=info msg="Start event monitor"
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194087589Z" level=info msg="Start cni network conf syncer for default"
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194106309Z" level=info msg="Start streaming server"
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194127753Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194144301Z" level=info msg="runtime interface starting up..."
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194165589Z" level=info msg="starting plugins..."
Sep 16 04:39:18.194423 containerd[2009]: time="2025-09-16T04:39:18.194192553Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 16 04:39:18.195735 containerd[2009]: time="2025-09-16T04:39:18.194997453Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 16 04:39:18.195735 containerd[2009]: time="2025-09-16T04:39:18.195121869Z" level=info msg="containerd successfully booted in 0.717503s"
Sep 16 04:39:18.195509 systemd[1]: Started containerd.service - containerd container runtime.
Sep 16 04:39:18.199438 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:39:18.203593 systemd[1]: Started ntpd.service - Network Time Service.
Sep 16 04:39:18.254003 amazon-ssm-agent[2178]: Initializing new seelog logger
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: New Seelog Logger Creation Complete
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 processing appconfig overrides
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 processing appconfig overrides
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.256444 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 processing appconfig overrides
Sep 16 04:39:18.256896 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2558 INFO Proxy environment variables:
Sep 16 04:39:18.262351 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.262516 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:18.263816 amazon-ssm-agent[2178]: 2025/09/16 04:39:18 processing appconfig overrides
Sep 16 04:39:18.347216 ntpd[2210]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:29 UTC 2025 (1): Starting
Sep 16 04:39:18.348323 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:29 UTC 2025 (1): Starting
Sep 16 04:39:18.348323 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 04:39:18.348323 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: ----------------------------------------------------
Sep 16 04:39:18.348323 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: ntp-4 is maintained by Network Time Foundation,
Sep 16 04:39:18.348323 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 04:39:18.347328 ntpd[2210]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 04:39:18.347347 ntpd[2210]: ----------------------------------------------------
Sep 16 04:39:18.347365 ntpd[2210]: ntp-4 is maintained by Network Time Foundation,
Sep 16 04:39:18.347382 ntpd[2210]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 04:39:18.347398 ntpd[2210]: corporation. Support and training for ntp-4 are
Sep 16 04:39:18.352426 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: corporation. Support and training for ntp-4 are
Sep 16 04:39:18.352426 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: available at https://www.nwtime.org/support
Sep 16 04:39:18.352426 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: ----------------------------------------------------
Sep 16 04:39:18.352426 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: proto: precision = 0.096 usec (-23)
Sep 16 04:39:18.351010 ntpd[2210]: available at https://www.nwtime.org/support
Sep 16 04:39:18.351032 ntpd[2210]: ----------------------------------------------------
Sep 16 04:39:18.352097 ntpd[2210]: proto: precision = 0.096 usec (-23)
Sep 16 04:39:18.355954 ntpd[2210]: basedate set to 2025-09-04
Sep 16 04:39:18.356008 ntpd[2210]: gps base set to 2025-09-07 (week 2383)
Sep 16 04:39:18.356219 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: basedate set to 2025-09-04
Sep 16 04:39:18.356219 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: gps base set to 2025-09-07 (week 2383)
Sep 16 04:39:18.356219 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 04:39:18.356149 ntpd[2210]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 04:39:18.358495 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 04:39:18.356214 ntpd[2210]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 04:39:18.358657 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 04:39:18.358657 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen normally on 3 eth0 172.31.31.59:123
Sep 16 04:39:18.358546 ntpd[2210]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 04:39:18.358796 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen normally on 4 lo [::1]:123
Sep 16 04:39:18.358796 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listen normally on 5 eth0 [fe80::474:59ff:fecf:a31b%2]:123
Sep 16 04:39:18.358796 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: Listening on routing socket on fd #22 for interface updates
Sep 16 04:39:18.358602 ntpd[2210]: Listen normally on 3 eth0 172.31.31.59:123
Sep 16 04:39:18.358650 ntpd[2210]: Listen normally on 4 lo [::1]:123
Sep 16 04:39:18.358694 ntpd[2210]: Listen normally on 5 eth0 [fe80::474:59ff:fecf:a31b%2]:123
Sep 16 04:39:18.358737 ntpd[2210]: Listening on routing socket on fd #22 for interface updates
Sep 16 04:39:18.368454 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2558 INFO https_proxy:
Sep 16 04:39:18.383469 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 04:39:18.384062 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 04:39:18.384062 ntpd[2210]: 16 Sep 04:39:18 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 04:39:18.383527 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 04:39:18.446239 polkitd[2169]: Started polkitd version 126
Sep 16 04:39:18.465868 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2558 INFO http_proxy:
Sep 16 04:39:18.471254 polkitd[2169]: Loading rules from directory /etc/polkit-1/rules.d
Sep 16 04:39:18.473905 polkitd[2169]: Loading rules from directory /run/polkit-1/rules.d
Sep 16 04:39:18.474534 polkitd[2169]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 16 04:39:18.475160 polkitd[2169]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 16 04:39:18.475209 polkitd[2169]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 16 04:39:18.475291 polkitd[2169]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 16 04:39:18.480752 polkitd[2169]: Finished loading, compiling and executing 2 rules
Sep 16 04:39:18.481277 systemd[1]: Started polkit.service - Authorization Manager.
Sep 16 04:39:18.487813 dbus-daemon[1970]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 16 04:39:18.491823 polkitd[2169]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 16 04:39:18.532089 systemd-hostnamed[2029]: Hostname set to (transient)
Sep 16 04:39:18.532280 systemd-resolved[1899]: System hostname changed to 'ip-172-31-31-59'.
Sep 16 04:39:18.565422 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2558 INFO no_proxy:
Sep 16 04:39:18.662569 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2560 INFO Checking if agent identity type OnPrem can be assumed
Sep 16 04:39:18.765502 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.2560 INFO Checking if agent identity type EC2 can be assumed
Sep 16 04:39:18.867424 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4091 INFO Agent will take identity from EC2
Sep 16 04:39:18.913487 tar[1989]: linux-arm64/README.md
Sep 16 04:39:18.954493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 16 04:39:18.967436 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4255 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 16 04:39:19.066481 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4255 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 16 04:39:19.166039 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4255 INFO [amazon-ssm-agent] Starting Core Agent
Sep 16 04:39:19.265879 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4255 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 16 04:39:19.366122 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4256 INFO [Registrar] Starting registrar module
Sep 16 04:39:19.466757 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4326 INFO [EC2Identity] Checking disk for registration info
Sep 16 04:39:19.567987 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4326 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 16 04:39:19.587992 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:39:19.630210 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:39:19.638386 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:39:19.640589 amazon-ssm-agent[2178]: 2025/09/16 04:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:19.640589 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:39:19.640589 amazon-ssm-agent[2178]: 2025/09/16 04:39:19 processing appconfig overrides
Sep 16 04:39:19.647803 systemd[1]: Started sshd@0-172.31.31.59:22-147.75.109.163:39420.service - OpenSSH per-connection server daemon (147.75.109.163:39420).
Sep 16 04:39:19.669001 amazon-ssm-agent[2178]: 2025-09-16 04:39:18.4326 INFO [EC2Identity] Generating registration keypair
Sep 16 04:39:19.684325 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:39:19.685050 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.5904 INFO [EC2Identity] Checking write access before registering
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.5912 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6392 INFO [EC2Identity] EC2 registration was successful.
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6393 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6395 INFO [CredentialRefresher] credentialRefresher has started
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6395 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6936 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 16 04:39:19.694176 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6938 INFO [CredentialRefresher] Credentials ready
Sep 16 04:39:19.697712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:39:19.732475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:39:19.739870 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 04:39:19.745936 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 04:39:19.748800 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 04:39:19.769711 amazon-ssm-agent[2178]: 2025-09-16 04:39:19.6940 INFO [CredentialRefresher] Next credential rotation will be in 29.9999925648 minutes
Sep 16 04:39:19.909116 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 39420 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:19.913104 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:19.928898 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 16 04:39:19.936881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 16 04:39:19.966853 systemd-logind[1982]: New session 1 of user core.
Sep 16 04:39:19.986530 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 16 04:39:19.995170 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 16 04:39:20.017008 (systemd)[2250]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 16 04:39:20.021980 systemd-logind[1982]: New session c1 of user core.
Sep 16 04:39:20.149655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:39:20.153343 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 16 04:39:20.166948 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:39:20.324917 systemd[2250]: Queued start job for default target default.target.
Sep 16 04:39:20.332680 systemd[2250]: Created slice app.slice - User Application Slice.
Sep 16 04:39:20.332748 systemd[2250]: Reached target paths.target - Paths.
Sep 16 04:39:20.332835 systemd[2250]: Reached target timers.target - Timers.
Sep 16 04:39:20.335632 systemd[2250]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 16 04:39:20.365187 systemd[2250]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 16 04:39:20.365449 systemd[2250]: Reached target sockets.target - Sockets.
Sep 16 04:39:20.365543 systemd[2250]: Reached target basic.target - Basic System.
Sep 16 04:39:20.365624 systemd[2250]: Reached target default.target - Main User Target.
Sep 16 04:39:20.365690 systemd[2250]: Startup finished in 324ms.
Sep 16 04:39:20.366357 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 16 04:39:20.376667 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 16 04:39:20.380506 systemd[1]: Startup finished in 3.639s (kernel) + 8.605s (initrd) + 9.151s (userspace) = 21.397s.
Sep 16 04:39:20.541870 systemd[1]: Started sshd@1-172.31.31.59:22-147.75.109.163:40380.service - OpenSSH per-connection server daemon (147.75.109.163:40380).
Sep 16 04:39:20.724548 amazon-ssm-agent[2178]: 2025-09-16 04:39:20.7237 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 16 04:39:20.748429 sshd[2275]: Accepted publickey for core from 147.75.109.163 port 40380 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:20.750112 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:20.767853 systemd-logind[1982]: New session 2 of user core.
Sep 16 04:39:20.779706 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 16 04:39:20.825802 amazon-ssm-agent[2178]: 2025-09-16 04:39:20.7414 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2280) started
Sep 16 04:39:20.912300 sshd[2281]: Connection closed by 147.75.109.163 port 40380
Sep 16 04:39:20.912093 sshd-session[2275]: pam_unix(sshd:session): session closed for user core
Sep 16 04:39:20.926311 systemd[1]: sshd@1-172.31.31.59:22-147.75.109.163:40380.service: Deactivated successfully.
Sep 16 04:39:20.926596 amazon-ssm-agent[2178]: 2025-09-16 04:39:20.7416 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 16 04:39:20.931038 systemd[1]: session-2.scope: Deactivated successfully.
Sep 16 04:39:20.937454 systemd-logind[1982]: Session 2 logged out. Waiting for processes to exit.
Sep 16 04:39:21.026428 systemd[1]: Started sshd@2-172.31.31.59:22-147.75.109.163:40396.service - OpenSSH per-connection server daemon (147.75.109.163:40396).
Sep 16 04:39:21.030643 systemd-logind[1982]: Removed session 2.
Sep 16 04:39:21.248535 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 40396 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:21.251533 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:21.262502 systemd-logind[1982]: New session 3 of user core.
Sep 16 04:39:21.268694 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 04:39:21.315898 kubelet[2261]: E0916 04:39:21.315732 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:39:21.320904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:39:21.321209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:39:21.322583 systemd[1]: kubelet.service: Consumed 1.446s CPU time, 258M memory peak.
Sep 16 04:39:21.387499 sshd[2304]: Connection closed by 147.75.109.163 port 40396
Sep 16 04:39:21.388232 sshd-session[2292]: pam_unix(sshd:session): session closed for user core
Sep 16 04:39:21.394632 systemd[1]: sshd@2-172.31.31.59:22-147.75.109.163:40396.service: Deactivated successfully.
Sep 16 04:39:21.398471 systemd[1]: session-3.scope: Deactivated successfully.
Sep 16 04:39:21.400341 systemd-logind[1982]: Session 3 logged out. Waiting for processes to exit.
Sep 16 04:39:21.403441 systemd-logind[1982]: Removed session 3.
Sep 16 04:39:21.436869 systemd[1]: Started sshd@3-172.31.31.59:22-147.75.109.163:40398.service - OpenSSH per-connection server daemon (147.75.109.163:40398).
Sep 16 04:39:21.626351 sshd[2311]: Accepted publickey for core from 147.75.109.163 port 40398 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:21.627369 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:21.635487 systemd-logind[1982]: New session 4 of user core.
Sep 16 04:39:21.642652 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 04:39:21.765361 sshd[2314]: Connection closed by 147.75.109.163 port 40398
Sep 16 04:39:21.766344 sshd-session[2311]: pam_unix(sshd:session): session closed for user core
Sep 16 04:39:21.773055 systemd-logind[1982]: Session 4 logged out. Waiting for processes to exit.
Sep 16 04:39:21.774643 systemd[1]: sshd@3-172.31.31.59:22-147.75.109.163:40398.service: Deactivated successfully.
Sep 16 04:39:21.778502 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 04:39:21.782777 systemd-logind[1982]: Removed session 4.
Sep 16 04:39:21.801480 systemd[1]: Started sshd@4-172.31.31.59:22-147.75.109.163:40404.service - OpenSSH per-connection server daemon (147.75.109.163:40404).
Sep 16 04:39:21.994228 sshd[2320]: Accepted publickey for core from 147.75.109.163 port 40404 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:21.996783 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:22.009566 systemd-logind[1982]: New session 5 of user core.
Sep 16 04:39:22.022584 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 16 04:39:22.146904 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 16 04:39:22.147552 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:39:22.162272 sudo[2324]: pam_unix(sudo:session): session closed for user root
Sep 16 04:39:22.187451 sshd[2323]: Connection closed by 147.75.109.163 port 40404
Sep 16 04:39:22.187025 sshd-session[2320]: pam_unix(sshd:session): session closed for user core
Sep 16 04:39:22.194094 systemd[1]: sshd@4-172.31.31.59:22-147.75.109.163:40404.service: Deactivated successfully.
Sep 16 04:39:22.198184 systemd[1]: session-5.scope: Deactivated successfully.
Sep 16 04:39:22.200879 systemd-logind[1982]: Session 5 logged out. Waiting for processes to exit.
Sep 16 04:39:22.204016 systemd-logind[1982]: Removed session 5.
Sep 16 04:39:22.219595 systemd[1]: Started sshd@5-172.31.31.59:22-147.75.109.163:40406.service - OpenSSH per-connection server daemon (147.75.109.163:40406).
Sep 16 04:39:22.426543 sshd[2330]: Accepted publickey for core from 147.75.109.163 port 40406 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:22.428811 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:22.436535 systemd-logind[1982]: New session 6 of user core.
Sep 16 04:39:22.447644 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 16 04:39:22.550918 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 16 04:39:22.551600 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:39:22.562110 sudo[2335]: pam_unix(sudo:session): session closed for user root
Sep 16 04:39:22.571951 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 16 04:39:22.573067 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:39:22.588985 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:39:22.656719 augenrules[2357]: No rules
Sep 16 04:39:22.659222 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:39:22.659841 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:39:22.662082 sudo[2334]: pam_unix(sudo:session): session closed for user root
Sep 16 04:39:22.685423 sshd[2333]: Connection closed by 147.75.109.163 port 40406
Sep 16 04:39:22.685803 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
Sep 16 04:39:22.694237 systemd[1]: sshd@5-172.31.31.59:22-147.75.109.163:40406.service: Deactivated successfully.
Sep 16 04:39:22.698641 systemd[1]: session-6.scope: Deactivated successfully.
Sep 16 04:39:22.701525 systemd-logind[1982]: Session 6 logged out. Waiting for processes to exit.
Sep 16 04:39:22.703835 systemd-logind[1982]: Removed session 6.
Sep 16 04:39:22.722844 systemd[1]: Started sshd@6-172.31.31.59:22-147.75.109.163:40422.service - OpenSSH per-connection server daemon (147.75.109.163:40422).
Sep 16 04:39:22.921887 sshd[2366]: Accepted publickey for core from 147.75.109.163 port 40422 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:39:22.924128 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:39:22.933200 systemd-logind[1982]: New session 7 of user core.
Sep 16 04:39:22.936646 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 16 04:39:23.039588 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 16 04:39:23.040230 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:39:23.583111 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 16 04:39:23.598166 (dockerd)[2387]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 16 04:39:23.977724 dockerd[2387]: time="2025-09-16T04:39:23.977560098Z" level=info msg="Starting up"
Sep 16 04:39:23.979660 dockerd[2387]: time="2025-09-16T04:39:23.979620054Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 16 04:39:23.999747 dockerd[2387]: time="2025-09-16T04:39:23.999690738Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 16 04:39:24.102470 systemd[1]: var-lib-docker-metacopy\x2dcheck1801039746-merged.mount: Deactivated successfully.
Sep 16 04:39:24.114797 dockerd[2387]: time="2025-09-16T04:39:24.114542715Z" level=info msg="Loading containers: start."
Sep 16 04:39:24.130487 kernel: Initializing XFRM netlink socket
Sep 16 04:39:24.475196 (udev-worker)[2408]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 04:39:24.549315 systemd-networkd[1898]: docker0: Link UP
Sep 16 04:39:24.561693 dockerd[2387]: time="2025-09-16T04:39:24.561624197Z" level=info msg="Loading containers: done."
Sep 16 04:39:24.594659 dockerd[2387]: time="2025-09-16T04:39:24.594587405Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 16 04:39:24.594872 dockerd[2387]: time="2025-09-16T04:39:24.594712877Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 16 04:39:24.594872 dockerd[2387]: time="2025-09-16T04:39:24.594861077Z" level=info msg="Initializing buildkit"
Sep 16 04:39:24.645554 dockerd[2387]: time="2025-09-16T04:39:24.645489605Z" level=info msg="Completed buildkit initialization"
Sep 16 04:39:24.660176 dockerd[2387]: time="2025-09-16T04:39:24.660076890Z" level=info msg="Daemon has completed initialization"
Sep 16 04:39:24.660576 dockerd[2387]: time="2025-09-16T04:39:24.660358290Z" level=info msg="API listen on /run/docker.sock"
Sep 16 04:39:24.660899 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 16 04:39:25.099428 systemd-resolved[1899]: Clock change detected. Flushing caches.
Sep 16 04:39:25.573810 containerd[2009]: time="2025-09-16T04:39:25.573744774Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 16 04:39:26.229850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846868047.mount: Deactivated successfully.
Sep 16 04:39:27.685354 containerd[2009]: time="2025-09-16T04:39:27.685299164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:27.687693 containerd[2009]: time="2025-09-16T04:39:27.687614828Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685"
Sep 16 04:39:27.688029 containerd[2009]: time="2025-09-16T04:39:27.687982076Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:27.693744 containerd[2009]: time="2025-09-16T04:39:27.693626384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:27.695735 containerd[2009]: time="2025-09-16T04:39:27.695672132Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.121863842s"
Sep 16 04:39:27.695897 containerd[2009]: time="2025-09-16T04:39:27.695734424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 16 04:39:27.697531 containerd[2009]: time="2025-09-16T04:39:27.697446008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 16 04:39:29.399678 containerd[2009]: time="2025-09-16T04:39:29.398663913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:29.401603 containerd[2009]: time="2025-09-16T04:39:29.401534769Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200"
Sep 16 04:39:29.402934 containerd[2009]: time="2025-09-16T04:39:29.402864933Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:29.407071 containerd[2009]: time="2025-09-16T04:39:29.406989573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:29.409173 containerd[2009]: time="2025-09-16T04:39:29.408932025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.711428477s"
Sep 16 04:39:29.409173 containerd[2009]: time="2025-09-16T04:39:29.408991833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 16 04:39:29.410665 containerd[2009]: time="2025-09-16T04:39:29.409901997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 16 04:39:30.615107 containerd[2009]: time="2025-09-16T04:39:30.615038987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:39:30.616826 containerd[2009]: time="2025-09-16T04:39:30.616774319Z"
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 16 04:39:30.618451 containerd[2009]: time="2025-09-16T04:39:30.617577899Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:30.622224 containerd[2009]: time="2025-09-16T04:39:30.622174883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:30.624276 containerd[2009]: time="2025-09-16T04:39:30.624214883Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.214244066s" Sep 16 04:39:30.624389 containerd[2009]: time="2025-09-16T04:39:30.624273947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 16 04:39:30.625121 containerd[2009]: time="2025-09-16T04:39:30.625041023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 16 04:39:31.304047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:39:31.307386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:31.716377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:39:31.731511 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:39:31.841675 kubelet[2677]: E0916 04:39:31.841579 2677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:39:31.849204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:39:31.849511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:39:31.850972 systemd[1]: kubelet.service: Consumed 363ms CPU time, 108.1M memory peak. Sep 16 04:39:32.209036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281508857.mount: Deactivated successfully. Sep 16 04:39:32.776670 containerd[2009]: time="2025-09-16T04:39:32.776566898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:32.779512 containerd[2009]: time="2025-09-16T04:39:32.779441534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 16 04:39:32.781977 containerd[2009]: time="2025-09-16T04:39:32.781903286Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:32.786014 containerd[2009]: time="2025-09-16T04:39:32.785938766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:32.787396 containerd[2009]: time="2025-09-16T04:39:32.787140998Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.162040683s" Sep 16 04:39:32.787396 containerd[2009]: time="2025-09-16T04:39:32.787196618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 16 04:39:32.787857 containerd[2009]: time="2025-09-16T04:39:32.787807862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:39:33.357806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020887846.mount: Deactivated successfully. Sep 16 04:39:34.548674 containerd[2009]: time="2025-09-16T04:39:34.547164866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:34.549779 containerd[2009]: time="2025-09-16T04:39:34.549734234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 16 04:39:34.551333 containerd[2009]: time="2025-09-16T04:39:34.551278958Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:34.558151 containerd[2009]: time="2025-09-16T04:39:34.558104078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:34.561176 containerd[2009]: time="2025-09-16T04:39:34.561116186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.773251948s" Sep 16 04:39:34.561304 containerd[2009]: time="2025-09-16T04:39:34.561174518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:39:34.561878 containerd[2009]: time="2025-09-16T04:39:34.561804375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:39:35.042730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067825212.mount: Deactivated successfully. Sep 16 04:39:35.055480 containerd[2009]: time="2025-09-16T04:39:35.055402549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:39:35.059231 containerd[2009]: time="2025-09-16T04:39:35.059168713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 16 04:39:35.061346 containerd[2009]: time="2025-09-16T04:39:35.061288621Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:39:35.067582 containerd[2009]: time="2025-09-16T04:39:35.067502509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:39:35.069244 containerd[2009]: time="2025-09-16T04:39:35.068794609Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 506.912594ms" Sep 16 04:39:35.069244 containerd[2009]: time="2025-09-16T04:39:35.068852233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:39:35.069475 containerd[2009]: time="2025-09-16T04:39:35.069428233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:39:35.681255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588083839.mount: Deactivated successfully. Sep 16 04:39:38.204268 containerd[2009]: time="2025-09-16T04:39:38.204211901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:38.206808 containerd[2009]: time="2025-09-16T04:39:38.206766641Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 16 04:39:38.209406 containerd[2009]: time="2025-09-16T04:39:38.209361725Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:38.215049 containerd[2009]: time="2025-09-16T04:39:38.214981781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:38.217316 containerd[2009]: time="2025-09-16T04:39:38.217254461Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.147780076s" Sep 16 04:39:38.217407 containerd[2009]: time="2025-09-16T04:39:38.217312925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 16 04:39:42.053906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:39:42.059411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:42.418882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:39:42.432188 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:39:42.514500 kubelet[2827]: E0916 04:39:42.514438 2827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:39:42.519197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:39:42.519741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:39:42.521800 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.9M memory peak. Sep 16 04:39:46.839251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:39:46.840085 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.9M memory peak. Sep 16 04:39:46.848996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:46.894242 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-7.scope)... 
Sep 16 04:39:46.894273 systemd[1]: Reloading... Sep 16 04:39:47.137704 zram_generator::config[2889]: No configuration found. Sep 16 04:39:47.604963 systemd[1]: Reloading finished in 710 ms. Sep 16 04:39:47.729935 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:39:47.730115 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:39:47.730662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:39:47.730750 systemd[1]: kubelet.service: Consumed 226ms CPU time, 95M memory peak. Sep 16 04:39:47.733586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:48.069436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:39:48.090512 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:39:48.174679 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:39:48.174679 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:39:48.175675 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:39:48.175675 kubelet[2949]: I0916 04:39:48.175357 2949 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:39:48.315687 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 16 04:39:49.346534 kubelet[2949]: I0916 04:39:49.346459 2949 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:39:49.346534 kubelet[2949]: I0916 04:39:49.346519 2949 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:39:49.348671 kubelet[2949]: I0916 04:39:49.347354 2949 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:39:49.406197 kubelet[2949]: E0916 04:39:49.406129 2949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:49.408886 kubelet[2949]: I0916 04:39:49.408842 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:39:49.421466 kubelet[2949]: I0916 04:39:49.421421 2949 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:39:49.427021 kubelet[2949]: I0916 04:39:49.426979 2949 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:39:49.428520 kubelet[2949]: I0916 04:39:49.428442 2949 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:39:49.428842 kubelet[2949]: I0916 04:39:49.428513 2949 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:39:49.429019 kubelet[2949]: I0916 04:39:49.428979 2949 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 16 04:39:49.429019 kubelet[2949]: I0916 04:39:49.429016 2949 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:39:49.429383 kubelet[2949]: I0916 04:39:49.429341 2949 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:39:49.436438 kubelet[2949]: I0916 04:39:49.436372 2949 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:39:49.436438 kubelet[2949]: I0916 04:39:49.436431 2949 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:39:49.438707 kubelet[2949]: I0916 04:39:49.436477 2949 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:39:49.438707 kubelet[2949]: I0916 04:39:49.436506 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:39:49.447028 kubelet[2949]: W0916 04:39:49.446913 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:49.447191 kubelet[2949]: E0916 04:39:49.447068 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:49.449065 kubelet[2949]: W0916 04:39:49.448976 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-59&limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:49.449199 kubelet[2949]: E0916 04:39:49.449073 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://172.31.31.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-59&limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:49.449252 kubelet[2949]: I0916 04:39:49.449215 2949 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:39:49.450377 kubelet[2949]: I0916 04:39:49.450335 2949 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:39:49.450595 kubelet[2949]: W0916 04:39:49.450564 2949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:39:49.452457 kubelet[2949]: I0916 04:39:49.452426 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:39:49.452696 kubelet[2949]: I0916 04:39:49.452674 2949 server.go:1287] "Started kubelet" Sep 16 04:39:49.453727 kubelet[2949]: I0916 04:39:49.453615 2949 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:39:49.462686 kubelet[2949]: I0916 04:39:49.462546 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:39:49.470039 kubelet[2949]: I0916 04:39:49.469984 2949 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:39:49.470175 kubelet[2949]: I0916 04:39:49.467827 2949 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:39:49.472928 kubelet[2949]: I0916 04:39:49.472898 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:39:49.473315 kubelet[2949]: E0916 04:39:49.472830 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.59:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-31-59.1865a97e4b50daa0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-59,UID:ip-172-31-31-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-59,},FirstTimestamp:2025-09-16 04:39:49.452610208 +0000 UTC m=+1.349672071,LastTimestamp:2025-09-16 04:39:49.452610208 +0000 UTC m=+1.349672071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-59,}" Sep 16 04:39:49.478881 kubelet[2949]: I0916 04:39:49.478836 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:39:49.482280 kubelet[2949]: I0916 04:39:49.482241 2949 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:39:49.484515 kubelet[2949]: E0916 04:39:49.482997 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-59\" not found" Sep 16 04:39:49.484515 kubelet[2949]: I0916 04:39:49.483875 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:39:49.484515 kubelet[2949]: I0916 04:39:49.483973 2949 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:39:49.485393 kubelet[2949]: W0916 04:39:49.485325 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:49.485572 kubelet[2949]: E0916 04:39:49.485539 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.31.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:49.485853 kubelet[2949]: E0916 04:39:49.485808 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-59?timeout=10s\": dial tcp 172.31.31.59:6443: connect: connection refused" interval="200ms" Sep 16 04:39:49.486525 kubelet[2949]: I0916 04:39:49.486491 2949 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:39:49.486950 kubelet[2949]: I0916 04:39:49.486918 2949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:39:49.489263 kubelet[2949]: I0916 04:39:49.489227 2949 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:39:49.507582 kubelet[2949]: I0916 04:39:49.507495 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:39:49.510794 kubelet[2949]: I0916 04:39:49.510733 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:39:49.510794 kubelet[2949]: I0916 04:39:49.510782 2949 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:39:49.510990 kubelet[2949]: I0916 04:39:49.510820 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:39:49.510990 kubelet[2949]: I0916 04:39:49.510838 2949 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:39:49.510990 kubelet[2949]: E0916 04:39:49.510905 2949 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:39:49.519147 kubelet[2949]: E0916 04:39:49.519074 2949 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:39:49.519473 kubelet[2949]: W0916 04:39:49.519285 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:49.519473 kubelet[2949]: E0916 04:39:49.519373 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:49.536849 kubelet[2949]: I0916 04:39:49.536776 2949 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:39:49.537357 kubelet[2949]: I0916 04:39:49.537035 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:39:49.537357 kubelet[2949]: I0916 04:39:49.537068 2949 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:39:49.544120 kubelet[2949]: I0916 04:39:49.544092 2949 policy_none.go:49] "None policy: Start" Sep 16 04:39:49.544280 kubelet[2949]: I0916 04:39:49.544262 2949 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:39:49.544765 kubelet[2949]: I0916 04:39:49.544366 2949 state_mem.go:35] "Initializing new in-memory state store" Sep 16 
04:39:49.557398 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:39:49.578287 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:39:49.583682 kubelet[2949]: E0916 04:39:49.583613 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-59\" not found" Sep 16 04:39:49.585817 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:39:49.589330 kubelet[2949]: E0916 04:39:49.588891 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-59.1865a97e4b50daa0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-59,UID:ip-172-31-31-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-59,},FirstTimestamp:2025-09-16 04:39:49.452610208 +0000 UTC m=+1.349672071,LastTimestamp:2025-09-16 04:39:49.452610208 +0000 UTC m=+1.349672071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-59,}" Sep 16 04:39:49.597748 kubelet[2949]: I0916 04:39:49.597299 2949 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:39:49.598855 kubelet[2949]: I0916 04:39:49.598756 2949 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:39:49.599079 kubelet[2949]: I0916 04:39:49.599016 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:39:49.601901 kubelet[2949]: I0916 04:39:49.601850 2949 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:39:49.603764 kubelet[2949]: E0916 04:39:49.603486 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:39:49.604280 kubelet[2949]: E0916 04:39:49.603735 2949 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-59\" not found" Sep 16 04:39:49.633784 systemd[1]: Created slice kubepods-burstable-pod94130aa057b7a1f8e0a5aa51b38b24ba.slice - libcontainer container kubepods-burstable-pod94130aa057b7a1f8e0a5aa51b38b24ba.slice. Sep 16 04:39:49.652672 kubelet[2949]: E0916 04:39:49.652619 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:49.657296 systemd[1]: Created slice kubepods-burstable-pod6cd5d4b724fae8a5bf77c68ee41ab307.slice - libcontainer container kubepods-burstable-pod6cd5d4b724fae8a5bf77c68ee41ab307.slice. Sep 16 04:39:49.663455 kubelet[2949]: E0916 04:39:49.663421 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:49.666871 systemd[1]: Created slice kubepods-burstable-pod61dfb68c5accdaf7fa97695aba1c2f5c.slice - libcontainer container kubepods-burstable-pod61dfb68c5accdaf7fa97695aba1c2f5c.slice. 
Sep 16 04:39:49.671145 kubelet[2949]: E0916 04:39:49.670804 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:49.687514 kubelet[2949]: E0916 04:39:49.687470 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-59?timeout=10s\": dial tcp 172.31.31.59:6443: connect: connection refused" interval="400ms" Sep 16 04:39:49.703431 kubelet[2949]: I0916 04:39:49.703398 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-59" Sep 16 04:39:49.704297 kubelet[2949]: E0916 04:39:49.704257 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.59:6443/api/v1/nodes\": dial tcp 172.31.31.59:6443: connect: connection refused" node="ip-172-31-31-59" Sep 16 04:39:49.785801 kubelet[2949]: I0916 04:39:49.785720 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-ca-certs\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59" Sep 16 04:39:49.785919 kubelet[2949]: I0916 04:39:49.785802 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61dfb68c5accdaf7fa97695aba1c2f5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-59\" (UID: \"61dfb68c5accdaf7fa97695aba1c2f5c\") " pod="kube-system/kube-scheduler-ip-172-31-31-59" Sep 16 04:39:49.785919 kubelet[2949]: I0916 04:39:49.785867 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:49.786024 kubelet[2949]: I0916 04:39:49.785909 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:49.786078 kubelet[2949]: I0916 04:39:49.785987 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:49.786137 kubelet[2949]: I0916 04:39:49.786108 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59" Sep 16 04:39:49.786197 kubelet[2949]: I0916 04:39:49.786163 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59" Sep 16 04:39:49.786245 kubelet[2949]: I0916 04:39:49.786224 2949 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:49.786432 kubelet[2949]: I0916 04:39:49.786311 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:49.906819 kubelet[2949]: I0916 04:39:49.906608 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-59" Sep 16 04:39:49.907794 kubelet[2949]: E0916 04:39:49.907741 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.59:6443/api/v1/nodes\": dial tcp 172.31.31.59:6443: connect: connection refused" node="ip-172-31-31-59" Sep 16 04:39:49.955419 containerd[2009]: time="2025-09-16T04:39:49.955066723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-59,Uid:94130aa057b7a1f8e0a5aa51b38b24ba,Namespace:kube-system,Attempt:0,}" Sep 16 04:39:49.966762 containerd[2009]: time="2025-09-16T04:39:49.966439063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-59,Uid:6cd5d4b724fae8a5bf77c68ee41ab307,Namespace:kube-system,Attempt:0,}" Sep 16 04:39:49.972949 containerd[2009]: time="2025-09-16T04:39:49.972877483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-59,Uid:61dfb68c5accdaf7fa97695aba1c2f5c,Namespace:kube-system,Attempt:0,}" Sep 16 04:39:50.026520 containerd[2009]: time="2025-09-16T04:39:50.026321403Z" level=info msg="connecting to shim 
79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1" address="unix:///run/containerd/s/7d351e61de346a753620374536ef31d01efaa86c5b62ea98168dbf0d1042077f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:50.073844 containerd[2009]: time="2025-09-16T04:39:50.073753612Z" level=info msg="connecting to shim 5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f" address="unix:///run/containerd/s/50e3e796e651dc3b13415594424c3f119f157f7751239b4b8000eb1583810909" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:50.084132 containerd[2009]: time="2025-09-16T04:39:50.084040780Z" level=info msg="connecting to shim fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c" address="unix:///run/containerd/s/3a70ac2c41d1d8f52c9c1f0b030286871006d5686549b1d2e3ae422eaee36622" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:50.088598 kubelet[2949]: E0916 04:39:50.088487 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-59?timeout=10s\": dial tcp 172.31.31.59:6443: connect: connection refused" interval="800ms" Sep 16 04:39:50.127982 systemd[1]: Started cri-containerd-79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1.scope - libcontainer container 79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1. Sep 16 04:39:50.161935 systemd[1]: Started cri-containerd-5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f.scope - libcontainer container 5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f. Sep 16 04:39:50.175719 systemd[1]: Started cri-containerd-fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c.scope - libcontainer container fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c. 
Sep 16 04:39:50.277320 kubelet[2949]: W0916 04:39:50.277243 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:50.277856 kubelet[2949]: E0916 04:39:50.277813 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:50.291657 containerd[2009]: time="2025-09-16T04:39:50.291546437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-59,Uid:94130aa057b7a1f8e0a5aa51b38b24ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1\"" Sep 16 04:39:50.300753 containerd[2009]: time="2025-09-16T04:39:50.300560225Z" level=info msg="CreateContainer within sandbox \"79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:39:50.313989 containerd[2009]: time="2025-09-16T04:39:50.313933733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-59,Uid:6cd5d4b724fae8a5bf77c68ee41ab307,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c\"" Sep 16 04:39:50.316901 kubelet[2949]: I0916 04:39:50.316848 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-59" Sep 16 04:39:50.317696 kubelet[2949]: E0916 04:39:50.317286 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.59:6443/api/v1/nodes\": dial 
tcp 172.31.31.59:6443: connect: connection refused" node="ip-172-31-31-59" Sep 16 04:39:50.326316 containerd[2009]: time="2025-09-16T04:39:50.326245577Z" level=info msg="CreateContainer within sandbox \"fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:39:50.343786 containerd[2009]: time="2025-09-16T04:39:50.343735829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-59,Uid:61dfb68c5accdaf7fa97695aba1c2f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f\"" Sep 16 04:39:50.349191 containerd[2009]: time="2025-09-16T04:39:50.349105637Z" level=info msg="CreateContainer within sandbox \"5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:39:50.352361 containerd[2009]: time="2025-09-16T04:39:50.352295825Z" level=info msg="Container ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:50.357374 containerd[2009]: time="2025-09-16T04:39:50.357258245Z" level=info msg="Container a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:50.370022 containerd[2009]: time="2025-09-16T04:39:50.369960881Z" level=info msg="CreateContainer within sandbox \"79afeb6907bc9145073a5192a32adfecf1418cd7cde927428739345275f6b3a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3\"" Sep 16 04:39:50.371385 containerd[2009]: time="2025-09-16T04:39:50.371331221Z" level=info msg="StartContainer for \"ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3\"" Sep 16 04:39:50.373715 containerd[2009]: time="2025-09-16T04:39:50.373656125Z" level=info msg="connecting to 
shim ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3" address="unix:///run/containerd/s/7d351e61de346a753620374536ef31d01efaa86c5b62ea98168dbf0d1042077f" protocol=ttrpc version=3 Sep 16 04:39:50.384586 containerd[2009]: time="2025-09-16T04:39:50.384519593Z" level=info msg="CreateContainer within sandbox \"fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\"" Sep 16 04:39:50.385602 containerd[2009]: time="2025-09-16T04:39:50.385532849Z" level=info msg="StartContainer for \"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\"" Sep 16 04:39:50.388187 containerd[2009]: time="2025-09-16T04:39:50.388130501Z" level=info msg="Container 31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:50.392979 containerd[2009]: time="2025-09-16T04:39:50.392900825Z" level=info msg="connecting to shim a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9" address="unix:///run/containerd/s/3a70ac2c41d1d8f52c9c1f0b030286871006d5686549b1d2e3ae422eaee36622" protocol=ttrpc version=3 Sep 16 04:39:50.410511 containerd[2009]: time="2025-09-16T04:39:50.410453261Z" level=info msg="CreateContainer within sandbox \"5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\"" Sep 16 04:39:50.414458 containerd[2009]: time="2025-09-16T04:39:50.414320633Z" level=info msg="StartContainer for \"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\"" Sep 16 04:39:50.418032 systemd[1]: Started cri-containerd-ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3.scope - libcontainer container ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3. 
Sep 16 04:39:50.418980 containerd[2009]: time="2025-09-16T04:39:50.417899321Z" level=info msg="connecting to shim 31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590" address="unix:///run/containerd/s/50e3e796e651dc3b13415594424c3f119f157f7751239b4b8000eb1583810909" protocol=ttrpc version=3 Sep 16 04:39:50.457942 systemd[1]: Started cri-containerd-a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9.scope - libcontainer container a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9. Sep 16 04:39:50.489956 systemd[1]: Started cri-containerd-31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590.scope - libcontainer container 31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590. Sep 16 04:39:50.585281 kubelet[2949]: W0916 04:39:50.585190 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-59&limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:50.585833 kubelet[2949]: E0916 04:39:50.585293 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-59&limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:50.607836 containerd[2009]: time="2025-09-16T04:39:50.607764690Z" level=info msg="StartContainer for \"ee278be052d4cb2c3f58cca1d79293962f3b86801c0dd2512b5f2a560b8627c3\" returns successfully" Sep 16 04:39:50.644969 containerd[2009]: time="2025-09-16T04:39:50.644692446Z" level=info msg="StartContainer for \"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\" returns successfully" Sep 16 04:39:50.691279 containerd[2009]: time="2025-09-16T04:39:50.691071055Z" level=info msg="StartContainer for 
\"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\" returns successfully" Sep 16 04:39:50.770125 kubelet[2949]: W0916 04:39:50.769958 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.59:6443: connect: connection refused Sep 16 04:39:50.770125 kubelet[2949]: E0916 04:39:50.770054 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:39:51.121027 kubelet[2949]: I0916 04:39:51.120903 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-59" Sep 16 04:39:51.577488 kubelet[2949]: E0916 04:39:51.577210 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:51.584816 kubelet[2949]: E0916 04:39:51.584522 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:51.593307 kubelet[2949]: E0916 04:39:51.593046 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:52.595989 kubelet[2949]: E0916 04:39:52.595918 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:52.598188 kubelet[2949]: E0916 04:39:52.596331 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:52.598188 kubelet[2949]: E0916 04:39:52.596681 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:53.597919 kubelet[2949]: E0916 04:39:53.597832 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:53.600266 kubelet[2949]: E0916 04:39:53.600219 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:54.001182 kubelet[2949]: E0916 04:39:54.000979 2949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-59\" not found" node="ip-172-31-31-59" Sep 16 04:39:54.116928 kubelet[2949]: I0916 04:39:54.116859 2949 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-59" Sep 16 04:39:54.116928 kubelet[2949]: E0916 04:39:54.116924 2949 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-59\": node \"ip-172-31-31-59\" not found" Sep 16 04:39:54.184935 kubelet[2949]: I0916 04:39:54.183921 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-59" Sep 16 04:39:54.227695 kubelet[2949]: E0916 04:39:54.227402 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-59" Sep 16 04:39:54.227695 kubelet[2949]: I0916 04:39:54.227448 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:54.258684 
kubelet[2949]: E0916 04:39:54.258519 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:54.258684 kubelet[2949]: I0916 04:39:54.258568 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-59" Sep 16 04:39:54.279832 kubelet[2949]: E0916 04:39:54.279776 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-59" Sep 16 04:39:54.445675 kubelet[2949]: I0916 04:39:54.444201 2949 apiserver.go:52] "Watching apiserver" Sep 16 04:39:54.484235 kubelet[2949]: I0916 04:39:54.484183 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:39:54.979671 kubelet[2949]: I0916 04:39:54.979617 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-59" Sep 16 04:39:56.208944 kubelet[2949]: I0916 04:39:56.208891 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-59" Sep 16 04:39:56.578746 systemd[1]: Reload requested from client PID 3226 ('systemctl') (unit session-7.scope)... Sep 16 04:39:56.578776 systemd[1]: Reloading... Sep 16 04:39:56.773704 zram_generator::config[3273]: No configuration found. Sep 16 04:39:57.257371 systemd[1]: Reloading finished in 677 ms. Sep 16 04:39:57.310193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:57.329382 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:39:57.329919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:39:57.330014 systemd[1]: kubelet.service: Consumed 2.053s CPU time, 128.1M memory peak. Sep 16 04:39:57.333967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:39:57.697554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:39:57.716972 (kubelet)[3330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:39:57.816297 kubelet[3330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:39:57.816297 kubelet[3330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:39:57.816297 kubelet[3330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:39:57.816851 kubelet[3330]: I0916 04:39:57.816410 3330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:39:57.829677 kubelet[3330]: I0916 04:39:57.829258 3330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:39:57.829677 kubelet[3330]: I0916 04:39:57.829306 3330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:39:57.830019 kubelet[3330]: I0916 04:39:57.829994 3330 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:39:57.838854 kubelet[3330]: I0916 04:39:57.838808 3330 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 16 04:39:57.847068 kubelet[3330]: I0916 04:39:57.846311 3330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:39:57.854843 kubelet[3330]: I0916 04:39:57.854567 3330 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:39:57.860985 kubelet[3330]: I0916 04:39:57.860950 3330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:39:57.861608 kubelet[3330]: I0916 04:39:57.861564 3330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:39:57.863248 kubelet[3330]: I0916 04:39:57.862822 3330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CP
UManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:39:57.863462 kubelet[3330]: I0916 04:39:57.863442 3330 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:39:57.863554 kubelet[3330]: I0916 04:39:57.863537 3330 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:39:57.863755 kubelet[3330]: I0916 04:39:57.863735 3330 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:39:57.865114 kubelet[3330]: I0916 04:39:57.864929 3330 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:39:57.865114 kubelet[3330]: I0916 04:39:57.864953 3330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:39:57.865114 kubelet[3330]: I0916 04:39:57.864996 3330 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:39:57.865114 kubelet[3330]: I0916 04:39:57.865023 3330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:39:57.871673 kubelet[3330]: I0916 04:39:57.869556 3330 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:39:57.872047 kubelet[3330]: I0916 04:39:57.871890 3330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:39:57.874701 kubelet[3330]: I0916 04:39:57.873835 3330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:39:57.874701 kubelet[3330]: I0916 04:39:57.873897 3330 server.go:1287] "Started kubelet" Sep 16 04:39:57.880196 kubelet[3330]: I0916 04:39:57.880145 3330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:39:57.884250 kubelet[3330]: I0916 04:39:57.884161 
3330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:39:57.895703 kubelet[3330]: I0916 04:39:57.894802 3330 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:39:57.902671 kubelet[3330]: I0916 04:39:57.901032 3330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:39:57.904037 kubelet[3330]: I0916 04:39:57.904001 3330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:39:57.907837 kubelet[3330]: I0916 04:39:57.907793 3330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:39:57.915300 kubelet[3330]: I0916 04:39:57.915247 3330 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:39:57.917751 kubelet[3330]: E0916 04:39:57.916680 3330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-59\" not found" Sep 16 04:39:57.919908 kubelet[3330]: I0916 04:39:57.917622 3330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:39:57.920435 kubelet[3330]: I0916 04:39:57.917966 3330 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:39:57.935538 kubelet[3330]: I0916 04:39:57.934566 3330 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:39:57.938308 kubelet[3330]: I0916 04:39:57.936201 3330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:39:57.944991 kubelet[3330]: I0916 04:39:57.944942 3330 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:39:57.945998 kubelet[3330]: E0916 04:39:57.945952 3330 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 16 04:39:57.967336 kubelet[3330]: I0916 04:39:57.967159 3330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 16 04:39:57.970457 kubelet[3330]: I0916 04:39:57.970402 3330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 16 04:39:57.970457 kubelet[3330]: I0916 04:39:57.970452 3330 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 16 04:39:57.970676 kubelet[3330]: I0916 04:39:57.970487 3330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 16 04:39:57.970676 kubelet[3330]: I0916 04:39:57.970503 3330 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 16 04:39:57.970676 kubelet[3330]: E0916 04:39:57.970593 3330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 04:39:58.072675 kubelet[3330]: E0916 04:39:58.071406 3330 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 16 04:39:58.092810 kubelet[3330]: I0916 04:39:58.092753 3330 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 16 04:39:58.094426 kubelet[3330]: I0916 04:39:58.092790 3330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 16 04:39:58.094513 kubelet[3330]: I0916 04:39:58.094464 3330 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.094960 3330 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.094988 3330 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.095022 3330 policy_none.go:49] "None policy: Start"
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.095040 3330 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.095062 3330 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 04:39:58.096239 kubelet[3330]: I0916 04:39:58.095280 3330 state_mem.go:75] "Updated machine memory state"
Sep 16 04:39:58.105420 kubelet[3330]: I0916 04:39:58.105387 3330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 16 04:39:58.105882 kubelet[3330]: I0916 04:39:58.105860 3330 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 04:39:58.106026 kubelet[3330]: I0916 04:39:58.105979 3330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 04:39:58.107411 kubelet[3330]: I0916 04:39:58.107383 3330 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 04:39:58.113148 kubelet[3330]: E0916 04:39:58.113112 3330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 16 04:39:58.229065 kubelet[3330]: I0916 04:39:58.228947 3330 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-59"
Sep 16 04:39:58.245303 kubelet[3330]: I0916 04:39:58.245088 3330 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-59"
Sep 16 04:39:58.246311 kubelet[3330]: I0916 04:39:58.246277 3330 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-59"
Sep 16 04:39:58.272721 kubelet[3330]: I0916 04:39:58.272517 3330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-59"
Sep 16 04:39:58.273582 kubelet[3330]: I0916 04:39:58.273526 3330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.275784 kubelet[3330]: I0916 04:39:58.275713 3330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-59"
Sep 16 04:39:58.292407 kubelet[3330]: E0916 04:39:58.292348 3330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-59\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-59"
Sep 16 04:39:58.295343 kubelet[3330]: E0916 04:39:58.295290 3330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-59\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.325419 kubelet[3330]: I0916 04:39:58.325021 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59"
Sep 16 04:39:58.326663 kubelet[3330]: I0916 04:39:58.326394 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.328840 kubelet[3330]: I0916 04:39:58.327050 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.328840 kubelet[3330]: I0916 04:39:58.327676 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.328840 kubelet[3330]: I0916 04:39:58.327723 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-ca-certs\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59"
Sep 16 04:39:58.328840 kubelet[3330]: I0916 04:39:58.327763 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94130aa057b7a1f8e0a5aa51b38b24ba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-59\" (UID: \"94130aa057b7a1f8e0a5aa51b38b24ba\") " pod="kube-system/kube-apiserver-ip-172-31-31-59"
Sep 16 04:39:58.328840 kubelet[3330]: I0916 04:39:58.327801 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.329273 kubelet[3330]: I0916 04:39:58.327839 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cd5d4b724fae8a5bf77c68ee41ab307-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-59\" (UID: \"6cd5d4b724fae8a5bf77c68ee41ab307\") " pod="kube-system/kube-controller-manager-ip-172-31-31-59"
Sep 16 04:39:58.329273 kubelet[3330]: I0916 04:39:58.327878 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61dfb68c5accdaf7fa97695aba1c2f5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-59\" (UID: \"61dfb68c5accdaf7fa97695aba1c2f5c\") " pod="kube-system/kube-scheduler-ip-172-31-31-59"
Sep 16 04:39:58.881280 kubelet[3330]: I0916 04:39:58.880864 3330 apiserver.go:52] "Watching apiserver"
Sep 16 04:39:58.921671 kubelet[3330]: I0916 04:39:58.920811 3330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 16 04:39:59.057329 kubelet[3330]: I0916 04:39:59.054331 3330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-59"
Sep 16 04:39:59.069677 kubelet[3330]: E0916 04:39:59.067876 3330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-59\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-59"
Sep 16 04:39:59.096992 kubelet[3330]: I0916 04:39:59.096829 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-59" podStartSLOduration=5.096732948 podStartE2EDuration="5.096732948s" podCreationTimestamp="2025-09-16 04:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:39:59.09481644 +0000 UTC m=+1.369206223" watchObservedRunningTime="2025-09-16 04:39:59.096732948 +0000 UTC m=+1.371122719"
Sep 16 04:39:59.138056 kubelet[3330]: I0916 04:39:59.137281 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-59" podStartSLOduration=1.137259337 podStartE2EDuration="1.137259337s" podCreationTimestamp="2025-09-16 04:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:39:59.136457173 +0000 UTC m=+1.410846956" watchObservedRunningTime="2025-09-16 04:39:59.137259337 +0000 UTC m=+1.411649120"
Sep 16 04:39:59.138723 kubelet[3330]: I0916 04:39:59.138446 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-59" podStartSLOduration=3.138423217 podStartE2EDuration="3.138423217s" podCreationTimestamp="2025-09-16 04:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:39:59.116406972 +0000 UTC m=+1.390796767" watchObservedRunningTime="2025-09-16 04:39:59.138423217 +0000 UTC m=+1.412813084"
Sep 16 04:40:01.734412 update_engine[1983]: I20250916 04:40:01.734315 1983 update_attempter.cc:509] Updating boot flags...
Sep 16 04:40:02.923067 kubelet[3330]: I0916 04:40:02.922985 3330 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 16 04:40:02.925280 containerd[2009]: time="2025-09-16T04:40:02.925236391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 16 04:40:02.926367 kubelet[3330]: I0916 04:40:02.926323 3330 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 16 04:40:03.500004 systemd[1]: Created slice kubepods-besteffort-pod0b6570c4_5136_47b2_9929_fabd731cdecc.slice - libcontainer container kubepods-besteffort-pod0b6570c4_5136_47b2_9929_fabd731cdecc.slice.
Sep 16 04:40:03.565560 kubelet[3330]: I0916 04:40:03.565510 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjlrc\" (UniqueName: \"kubernetes.io/projected/0b6570c4-5136-47b2-9929-fabd731cdecc-kube-api-access-vjlrc\") pod \"kube-proxy-t7m5t\" (UID: \"0b6570c4-5136-47b2-9929-fabd731cdecc\") " pod="kube-system/kube-proxy-t7m5t"
Sep 16 04:40:03.566001 kubelet[3330]: I0916 04:40:03.565929 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b6570c4-5136-47b2-9929-fabd731cdecc-kube-proxy\") pod \"kube-proxy-t7m5t\" (UID: \"0b6570c4-5136-47b2-9929-fabd731cdecc\") " pod="kube-system/kube-proxy-t7m5t"
Sep 16 04:40:03.566146 kubelet[3330]: I0916 04:40:03.566120 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b6570c4-5136-47b2-9929-fabd731cdecc-xtables-lock\") pod \"kube-proxy-t7m5t\" (UID: \"0b6570c4-5136-47b2-9929-fabd731cdecc\") " pod="kube-system/kube-proxy-t7m5t"
Sep 16 04:40:03.566324 kubelet[3330]: I0916 04:40:03.566296 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b6570c4-5136-47b2-9929-fabd731cdecc-lib-modules\") pod \"kube-proxy-t7m5t\" (UID: \"0b6570c4-5136-47b2-9929-fabd731cdecc\") " pod="kube-system/kube-proxy-t7m5t"
Sep 16 04:40:03.816439 containerd[2009]: time="2025-09-16T04:40:03.816288788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7m5t,Uid:0b6570c4-5136-47b2-9929-fabd731cdecc,Namespace:kube-system,Attempt:0,}"
Sep 16 04:40:03.846261 containerd[2009]: time="2025-09-16T04:40:03.846181268Z" level=info msg="connecting to shim 01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711" address="unix:///run/containerd/s/80710dd81fb53ac4297f54e007ff8b4c13119c6791ca95fc2d40cb963f7b6c69" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:03.902994 systemd[1]: Started cri-containerd-01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711.scope - libcontainer container 01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711.
Sep 16 04:40:03.993659 containerd[2009]: time="2025-09-16T04:40:03.993177165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7m5t,Uid:0b6570c4-5136-47b2-9929-fabd731cdecc,Namespace:kube-system,Attempt:0,} returns sandbox id \"01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711\""
Sep 16 04:40:04.008661 containerd[2009]: time="2025-09-16T04:40:04.007796753Z" level=info msg="CreateContainer within sandbox \"01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 04:40:04.040665 containerd[2009]: time="2025-09-16T04:40:04.039468017Z" level=info msg="Container 3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:40:04.068798 containerd[2009]: time="2025-09-16T04:40:04.068613197Z" level=info msg="CreateContainer within sandbox \"01823f78be2686fefaca7ff93886308f883274bede9156b2b6e001a76b947711\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37\""
Sep 16 04:40:04.070994 containerd[2009]: time="2025-09-16T04:40:04.070951037Z" level=info msg="StartContainer for \"3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37\""
Sep 16 04:40:04.079769 containerd[2009]: time="2025-09-16T04:40:04.079618313Z" level=info msg="connecting to shim 3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37" address="unix:///run/containerd/s/80710dd81fb53ac4297f54e007ff8b4c13119c6791ca95fc2d40cb963f7b6c69" protocol=ttrpc version=3
Sep 16 04:40:04.136348 systemd[1]: Started cri-containerd-3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37.scope - libcontainer container 3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37.
Sep 16 04:40:04.176038 systemd[1]: Created slice kubepods-besteffort-pod5208b3f8_ecab_4c7f_9ab3_5d83f5d4442b.slice - libcontainer container kubepods-besteffort-pod5208b3f8_ecab_4c7f_9ab3_5d83f5d4442b.slice.
Sep 16 04:40:04.275753 kubelet[3330]: I0916 04:40:04.275598 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxsv\" (UniqueName: \"kubernetes.io/projected/5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b-kube-api-access-njxsv\") pod \"tigera-operator-755d956888-4dmmd\" (UID: \"5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b\") " pod="tigera-operator/tigera-operator-755d956888-4dmmd"
Sep 16 04:40:04.277972 kubelet[3330]: I0916 04:40:04.277778 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b-var-lib-calico\") pod \"tigera-operator-755d956888-4dmmd\" (UID: \"5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b\") " pod="tigera-operator/tigera-operator-755d956888-4dmmd"
Sep 16 04:40:04.282856 containerd[2009]: time="2025-09-16T04:40:04.282810030Z" level=info msg="StartContainer for \"3483a3595ab10b839045c95fc83efe8548bfcdeb968b8da980e57964e86a7a37\" returns successfully"
Sep 16 04:40:04.486494 containerd[2009]: time="2025-09-16T04:40:04.485933983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4dmmd,Uid:5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b,Namespace:tigera-operator,Attempt:0,}"
Sep 16 04:40:04.529724 containerd[2009]: time="2025-09-16T04:40:04.529596871Z" level=info msg="connecting to shim 2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb" address="unix:///run/containerd/s/12a347be6869ceb08a5fad90f0cc2133e564f67f89625723baa9022c6cce0f01" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:04.586121 systemd[1]: Started cri-containerd-2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb.scope - libcontainer container 2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb.
Sep 16 04:40:04.682414 containerd[2009]: time="2025-09-16T04:40:04.682358072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4dmmd,Uid:5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb\""
Sep 16 04:40:04.686002 containerd[2009]: time="2025-09-16T04:40:04.685950044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 16 04:40:05.108372 kubelet[3330]: I0916 04:40:05.108250 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t7m5t" podStartSLOduration=2.108228246 podStartE2EDuration="2.108228246s" podCreationTimestamp="2025-09-16 04:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:40:05.107253402 +0000 UTC m=+7.381643197" watchObservedRunningTime="2025-09-16 04:40:05.108228246 +0000 UTC m=+7.382618041"
Sep 16 04:40:06.203741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109985249.mount: Deactivated successfully.
Sep 16 04:40:07.187434 containerd[2009]: time="2025-09-16T04:40:07.187328889Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:07.189518 containerd[2009]: time="2025-09-16T04:40:07.189141393Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 16 04:40:07.190704 containerd[2009]: time="2025-09-16T04:40:07.190623381Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:07.194348 containerd[2009]: time="2025-09-16T04:40:07.194287893Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:07.195889 containerd[2009]: time="2025-09-16T04:40:07.195845109Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.509811701s"
Sep 16 04:40:07.196094 containerd[2009]: time="2025-09-16T04:40:07.196021365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 16 04:40:07.202583 containerd[2009]: time="2025-09-16T04:40:07.202370289Z" level=info msg="CreateContainer within sandbox \"2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 16 04:40:07.214605 containerd[2009]: time="2025-09-16T04:40:07.214540065Z" level=info msg="Container 9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:40:07.220362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681767838.mount: Deactivated successfully.
Sep 16 04:40:07.232902 containerd[2009]: time="2025-09-16T04:40:07.232830153Z" level=info msg="CreateContainer within sandbox \"2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\""
Sep 16 04:40:07.234188 containerd[2009]: time="2025-09-16T04:40:07.234057213Z" level=info msg="StartContainer for \"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\""
Sep 16 04:40:07.236842 containerd[2009]: time="2025-09-16T04:40:07.236776965Z" level=info msg="connecting to shim 9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783" address="unix:///run/containerd/s/12a347be6869ceb08a5fad90f0cc2133e564f67f89625723baa9022c6cce0f01" protocol=ttrpc version=3
Sep 16 04:40:07.287960 systemd[1]: Started cri-containerd-9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783.scope - libcontainer container 9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783.
Sep 16 04:40:07.345993 containerd[2009]: time="2025-09-16T04:40:07.345928641Z" level=info msg="StartContainer for \"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" returns successfully"
Sep 16 04:40:08.741668 kubelet[3330]: I0916 04:40:08.740905 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-4dmmd" podStartSLOduration=2.228205543 podStartE2EDuration="4.740880048s" podCreationTimestamp="2025-09-16 04:40:04 +0000 UTC" firstStartedPulling="2025-09-16 04:40:04.685044404 +0000 UTC m=+6.959434187" lastFinishedPulling="2025-09-16 04:40:07.197718933 +0000 UTC m=+9.472108692" observedRunningTime="2025-09-16 04:40:08.127017561 +0000 UTC m=+10.401407356" watchObservedRunningTime="2025-09-16 04:40:08.740880048 +0000 UTC m=+11.015269819"
Sep 16 04:40:16.310211 sudo[2370]: pam_unix(sudo:session): session closed for user root
Sep 16 04:40:16.334940 sshd[2369]: Connection closed by 147.75.109.163 port 40422
Sep 16 04:40:16.337944 sshd-session[2366]: pam_unix(sshd:session): session closed for user core
Sep 16 04:40:16.345235 systemd[1]: session-7.scope: Deactivated successfully.
Sep 16 04:40:16.346247 systemd[1]: session-7.scope: Consumed 12.417s CPU time, 220.9M memory peak.
Sep 16 04:40:16.352028 systemd[1]: sshd@6-172.31.31.59:22-147.75.109.163:40422.service: Deactivated successfully.
Sep 16 04:40:16.359746 systemd-logind[1982]: Session 7 logged out. Waiting for processes to exit.
Sep 16 04:40:16.366052 systemd-logind[1982]: Removed session 7.
Sep 16 04:40:29.183203 systemd[1]: Created slice kubepods-besteffort-pod6fd5e795_dbfa_4fd4_825b_c36ea1e343db.slice - libcontainer container kubepods-besteffort-pod6fd5e795_dbfa_4fd4_825b_c36ea1e343db.slice.
Sep 16 04:40:29.244281 kubelet[3330]: I0916 04:40:29.244211 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2khl\" (UniqueName: \"kubernetes.io/projected/6fd5e795-dbfa-4fd4-825b-c36ea1e343db-kube-api-access-m2khl\") pod \"calico-typha-6d56496955-gldlk\" (UID: \"6fd5e795-dbfa-4fd4-825b-c36ea1e343db\") " pod="calico-system/calico-typha-6d56496955-gldlk"
Sep 16 04:40:29.244909 kubelet[3330]: I0916 04:40:29.244330 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6fd5e795-dbfa-4fd4-825b-c36ea1e343db-typha-certs\") pod \"calico-typha-6d56496955-gldlk\" (UID: \"6fd5e795-dbfa-4fd4-825b-c36ea1e343db\") " pod="calico-system/calico-typha-6d56496955-gldlk"
Sep 16 04:40:29.244909 kubelet[3330]: I0916 04:40:29.244379 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd5e795-dbfa-4fd4-825b-c36ea1e343db-tigera-ca-bundle\") pod \"calico-typha-6d56496955-gldlk\" (UID: \"6fd5e795-dbfa-4fd4-825b-c36ea1e343db\") " pod="calico-system/calico-typha-6d56496955-gldlk"
Sep 16 04:40:29.491474 containerd[2009]: time="2025-09-16T04:40:29.490835575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d56496955-gldlk,Uid:6fd5e795-dbfa-4fd4-825b-c36ea1e343db,Namespace:calico-system,Attempt:0,}"
Sep 16 04:40:29.550279 containerd[2009]: time="2025-09-16T04:40:29.550219832Z" level=info msg="connecting to shim 21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8" address="unix:///run/containerd/s/70beef456947a7b0adce3e49dc22cb780a1df1518091bda586374de1ea3a9e6e" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:29.608414 systemd[1]: Created slice kubepods-besteffort-pod1220a7d0_bd24_4832_b1ed_00323eb05829.slice - libcontainer container kubepods-besteffort-pod1220a7d0_bd24_4832_b1ed_00323eb05829.slice.
Sep 16 04:40:29.647896 kubelet[3330]: I0916 04:40:29.647843 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-cni-log-dir\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.648532 kubelet[3330]: I0916 04:40:29.648144 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-cni-net-dir\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.648532 kubelet[3330]: I0916 04:40:29.648248 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-xtables-lock\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.648532 kubelet[3330]: I0916 04:40:29.648330 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1220a7d0-bd24-4832-b1ed-00323eb05829-node-certs\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.648532 kubelet[3330]: I0916 04:40:29.648410 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1220a7d0-bd24-4832-b1ed-00323eb05829-tigera-ca-bundle\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649100 kubelet[3330]: I0916 04:40:29.648450 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-var-lib-calico\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649100 kubelet[3330]: I0916 04:40:29.648950 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-cni-bin-dir\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649100 kubelet[3330]: I0916 04:40:29.649034 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-flexvol-driver-host\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649442 kubelet[3330]: I0916 04:40:29.649075 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-lib-modules\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649631 kubelet[3330]: I0916 04:40:29.649606 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-var-run-calico\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.649845 kubelet[3330]: I0916 04:40:29.649794 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xm2v\" (UniqueName: \"kubernetes.io/projected/1220a7d0-bd24-4832-b1ed-00323eb05829-kube-api-access-4xm2v\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.650025 kubelet[3330]: I0916 04:40:29.649966 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1220a7d0-bd24-4832-b1ed-00323eb05829-policysync\") pod \"calico-node-8qzkj\" (UID: \"1220a7d0-bd24-4832-b1ed-00323eb05829\") " pod="calico-system/calico-node-8qzkj"
Sep 16 04:40:29.675072 systemd[1]: Started cri-containerd-21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8.scope - libcontainer container 21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8.
Sep 16 04:40:29.757909 kubelet[3330]: E0916 04:40:29.757793 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.758090 kubelet[3330]: W0916 04:40:29.758060 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.759544 kubelet[3330]: E0916 04:40:29.759504 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.761216 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.764693 kubelet[3330]: W0916 04:40:29.761249 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.761418 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.762400 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.764693 kubelet[3330]: W0916 04:40:29.762429 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.762458 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.763362 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.764693 kubelet[3330]: W0916 04:40:29.763504 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.763534 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.764693 kubelet[3330]: E0916 04:40:29.764507 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.765256 kubelet[3330]: W0916 04:40:29.764547 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.765256 kubelet[3330]: E0916 04:40:29.764575 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.766291 kubelet[3330]: E0916 04:40:29.765766 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.766291 kubelet[3330]: W0916 04:40:29.766080 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.768668 kubelet[3330]: E0916 04:40:29.766480 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.768668 kubelet[3330]: E0916 04:40:29.768327 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.768668 kubelet[3330]: W0916 04:40:29.768380 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.768668 kubelet[3330]: E0916 04:40:29.768412 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.771090 kubelet[3330]: E0916 04:40:29.769282 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.771090 kubelet[3330]: W0916 04:40:29.769453 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.771090 kubelet[3330]: E0916 04:40:29.769486 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:29.772678 kubelet[3330]: E0916 04:40:29.771717 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:29.772678 kubelet[3330]: W0916 04:40:29.771756 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:29.772678 kubelet[3330]: E0916 04:40:29.771827 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 16 04:40:29.776282 kubelet[3330]: E0916 04:40:29.773446 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.776282 kubelet[3330]: W0916 04:40:29.773488 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.776282 kubelet[3330]: E0916 04:40:29.773546 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.796900 kubelet[3330]: E0916 04:40:29.796864 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.797311 kubelet[3330]: W0916 04:40:29.797180 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.797311 kubelet[3330]: E0916 04:40:29.797226 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.847777 containerd[2009]: time="2025-09-16T04:40:29.847691469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d56496955-gldlk,Uid:6fd5e795-dbfa-4fd4-825b-c36ea1e343db,Namespace:calico-system,Attempt:0,} returns sandbox id \"21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8\"" Sep 16 04:40:29.853415 containerd[2009]: time="2025-09-16T04:40:29.853205865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:40:29.897665 kubelet[3330]: E0916 04:40:29.895871 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5" Sep 16 04:40:29.925297 kubelet[3330]: E0916 04:40:29.925259 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.925679 kubelet[3330]: W0916 04:40:29.925468 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.925679 kubelet[3330]: E0916 04:40:29.925505 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.926314 kubelet[3330]: E0916 04:40:29.926195 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.926754 kubelet[3330]: W0916 04:40:29.926558 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.926854 containerd[2009]: time="2025-09-16T04:40:29.926717590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8qzkj,Uid:1220a7d0-bd24-4832-b1ed-00323eb05829,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:29.927438 kubelet[3330]: E0916 04:40:29.927137 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.928112 kubelet[3330]: E0916 04:40:29.927984 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.928766 kubelet[3330]: W0916 04:40:29.928335 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.928766 kubelet[3330]: E0916 04:40:29.928376 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.930677 kubelet[3330]: E0916 04:40:29.929324 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.931167 kubelet[3330]: W0916 04:40:29.930883 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.931167 kubelet[3330]: E0916 04:40:29.930933 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.931549 kubelet[3330]: E0916 04:40:29.931523 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.931700 kubelet[3330]: W0916 04:40:29.931676 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.931974 kubelet[3330]: E0916 04:40:29.931805 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.933745 kubelet[3330]: E0916 04:40:29.932633 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.935198 kubelet[3330]: W0916 04:40:29.933858 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.935198 kubelet[3330]: E0916 04:40:29.933894 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.935198 kubelet[3330]: E0916 04:40:29.934313 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.935198 kubelet[3330]: W0916 04:40:29.934334 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.935198 kubelet[3330]: E0916 04:40:29.934356 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.937835 kubelet[3330]: E0916 04:40:29.937786 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.938064 kubelet[3330]: W0916 04:40:29.937824 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.938136 kubelet[3330]: E0916 04:40:29.938074 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.940208 kubelet[3330]: E0916 04:40:29.939749 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.940357 kubelet[3330]: W0916 04:40:29.940320 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.940410 kubelet[3330]: E0916 04:40:29.940361 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.944971 kubelet[3330]: E0916 04:40:29.944918 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.944971 kubelet[3330]: W0916 04:40:29.944958 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.945165 kubelet[3330]: E0916 04:40:29.944992 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.948609 kubelet[3330]: E0916 04:40:29.948272 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.948609 kubelet[3330]: W0916 04:40:29.948601 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.949482 kubelet[3330]: E0916 04:40:29.949356 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.951763 kubelet[3330]: E0916 04:40:29.951696 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.952911 kubelet[3330]: W0916 04:40:29.952738 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.954686 kubelet[3330]: E0916 04:40:29.953558 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.955111 kubelet[3330]: E0916 04:40:29.955083 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.955239 kubelet[3330]: W0916 04:40:29.955214 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.955351 kubelet[3330]: E0916 04:40:29.955328 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.955789 kubelet[3330]: E0916 04:40:29.955763 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.956194 kubelet[3330]: W0916 04:40:29.955905 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.956194 kubelet[3330]: E0916 04:40:29.955943 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.956807 kubelet[3330]: E0916 04:40:29.956773 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.957753 kubelet[3330]: W0916 04:40:29.957713 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.957949 kubelet[3330]: E0916 04:40:29.957925 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.962360 kubelet[3330]: E0916 04:40:29.962235 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.962360 kubelet[3330]: W0916 04:40:29.962267 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.962360 kubelet[3330]: E0916 04:40:29.962298 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.964355 kubelet[3330]: E0916 04:40:29.964289 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.964776 kubelet[3330]: W0916 04:40:29.964742 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.966360 kubelet[3330]: E0916 04:40:29.964931 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.966360 kubelet[3330]: E0916 04:40:29.965729 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.966360 kubelet[3330]: W0916 04:40:29.965758 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.966360 kubelet[3330]: E0916 04:40:29.965786 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.967456 kubelet[3330]: E0916 04:40:29.967420 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.967616 kubelet[3330]: W0916 04:40:29.967590 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.967911 kubelet[3330]: E0916 04:40:29.967883 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.969108 kubelet[3330]: E0916 04:40:29.969071 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.969371 kubelet[3330]: W0916 04:40:29.969264 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.969371 kubelet[3330]: E0916 04:40:29.969327 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.971127 kubelet[3330]: E0916 04:40:29.970929 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.972062 kubelet[3330]: W0916 04:40:29.971904 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.972754 kubelet[3330]: E0916 04:40:29.972310 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.972754 kubelet[3330]: I0916 04:40:29.972370 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbfc\" (UniqueName: \"kubernetes.io/projected/838fa562-42c6-4d1d-a9fb-07d8f39bc7c5-kube-api-access-9mbfc\") pod \"csi-node-driver-cp4p4\" (UID: \"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5\") " pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:29.974597 kubelet[3330]: E0916 04:40:29.974463 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.976510 kubelet[3330]: W0916 04:40:29.974906 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.976510 kubelet[3330]: E0916 04:40:29.974956 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.976510 kubelet[3330]: I0916 04:40:29.975019 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/838fa562-42c6-4d1d-a9fb-07d8f39bc7c5-varrun\") pod \"csi-node-driver-cp4p4\" (UID: \"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5\") " pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:29.978087 kubelet[3330]: E0916 04:40:29.978048 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.978455 kubelet[3330]: W0916 04:40:29.978234 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.978455 kubelet[3330]: E0916 04:40:29.978280 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.978455 kubelet[3330]: I0916 04:40:29.978324 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/838fa562-42c6-4d1d-a9fb-07d8f39bc7c5-socket-dir\") pod \"csi-node-driver-cp4p4\" (UID: \"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5\") " pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:29.981518 kubelet[3330]: E0916 04:40:29.980746 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.981518 kubelet[3330]: W0916 04:40:29.980781 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.981518 kubelet[3330]: E0916 04:40:29.980818 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.981518 kubelet[3330]: I0916 04:40:29.980866 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/838fa562-42c6-4d1d-a9fb-07d8f39bc7c5-kubelet-dir\") pod \"csi-node-driver-cp4p4\" (UID: \"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5\") " pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:29.984296 kubelet[3330]: E0916 04:40:29.984225 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.984296 kubelet[3330]: W0916 04:40:29.984259 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.985507 kubelet[3330]: E0916 04:40:29.984686 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.985507 kubelet[3330]: I0916 04:40:29.985223 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/838fa562-42c6-4d1d-a9fb-07d8f39bc7c5-registration-dir\") pod \"csi-node-driver-cp4p4\" (UID: \"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5\") " pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:29.987845 kubelet[3330]: E0916 04:40:29.986514 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.987845 kubelet[3330]: W0916 04:40:29.987700 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.988389 kubelet[3330]: E0916 04:40:29.988346 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.991966 kubelet[3330]: E0916 04:40:29.991835 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.991966 kubelet[3330]: W0916 04:40:29.991910 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.992485 kubelet[3330]: E0916 04:40:29.992357 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.993324 kubelet[3330]: E0916 04:40:29.993243 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.993324 kubelet[3330]: W0916 04:40:29.993274 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.993603 kubelet[3330]: E0916 04:40:29.993574 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.994295 kubelet[3330]: E0916 04:40:29.994248 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.994576 kubelet[3330]: W0916 04:40:29.994377 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.994576 kubelet[3330]: E0916 04:40:29.994454 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.995349 kubelet[3330]: E0916 04:40:29.995317 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.995695 kubelet[3330]: W0916 04:40:29.995410 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.995695 kubelet[3330]: E0916 04:40:29.995496 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:29.997917 kubelet[3330]: E0916 04:40:29.997858 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.998388 kubelet[3330]: W0916 04:40:29.998092 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.998388 kubelet[3330]: E0916 04:40:29.998133 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:29.999281 kubelet[3330]: E0916 04:40:29.999234 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:29.999712 kubelet[3330]: W0916 04:40:29.999391 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:29.999712 kubelet[3330]: E0916 04:40:29.999424 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:30.000316 kubelet[3330]: E0916 04:40:30.000100 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:30.000316 kubelet[3330]: W0916 04:40:30.000126 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:30.000316 kubelet[3330]: E0916 04:40:30.000150 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 16 04:40:30.000709 kubelet[3330]: E0916 04:40:30.000684 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.000985 kubelet[3330]: W0916 04:40:30.000790 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.001148 kubelet[3330]: E0916 04:40:30.000821 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.002029 kubelet[3330]: E0916 04:40:30.001976 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.002967 kubelet[3330]: W0916 04:40:30.002152 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.002967 kubelet[3330]: E0916 04:40:30.002190 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.008015 containerd[2009]: time="2025-09-16T04:40:30.007838070Z" level=info msg="connecting to shim 91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7" address="unix:///run/containerd/s/c68ace1eefb7b43cfd0a06e29feae8cc67fd104ca4a0391cb38114cd38dbf72f" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:30.087984 kubelet[3330]: E0916 04:40:30.087540 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.088737 kubelet[3330]: W0916 04:40:30.088696 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.089303 kubelet[3330]: E0916 04:40:30.089122 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.091801 kubelet[3330]: E0916 04:40:30.091597 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.091801 kubelet[3330]: W0916 04:40:30.091680 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.091801 kubelet[3330]: E0916 04:40:30.091740 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.092338 systemd[1]: Started cri-containerd-91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7.scope - libcontainer container 91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7.
Sep 16 04:40:30.094703 kubelet[3330]: E0916 04:40:30.094399 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.095285 kubelet[3330]: W0916 04:40:30.094876 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.095464 kubelet[3330]: E0916 04:40:30.095408 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.098514 kubelet[3330]: E0916 04:40:30.098465 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.098514 kubelet[3330]: W0916 04:40:30.098503 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.099367 kubelet[3330]: E0916 04:40:30.098552 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.101207 kubelet[3330]: E0916 04:40:30.101151 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.101207 kubelet[3330]: W0916 04:40:30.101191 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.101815 kubelet[3330]: E0916 04:40:30.101295 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.101815 kubelet[3330]: E0916 04:40:30.101766 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.101815 kubelet[3330]: W0916 04:40:30.101786 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.102690 kubelet[3330]: E0916 04:40:30.102114 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.102690 kubelet[3330]: W0916 04:40:30.102143 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.102690 kubelet[3330]: E0916 04:40:30.102203 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.102690 kubelet[3330]: E0916 04:40:30.102436 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.102690 kubelet[3330]: W0916 04:40:30.102605 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.102690 kubelet[3330]: E0916 04:40:30.102450 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.103149 kubelet[3330]: E0916 04:40:30.102631 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.103663 kubelet[3330]: E0916 04:40:30.103263 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.103663 kubelet[3330]: W0916 04:40:30.103293 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.103663 kubelet[3330]: E0916 04:40:30.103332 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.104692 kubelet[3330]: E0916 04:40:30.104651 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.104907 kubelet[3330]: W0916 04:40:30.104690 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.104907 kubelet[3330]: E0916 04:40:30.104810 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.105084 kubelet[3330]: E0916 04:40:30.104976 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.105084 kubelet[3330]: W0916 04:40:30.104991 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.105084 kubelet[3330]: E0916 04:40:30.105042 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.105289 kubelet[3330]: E0916 04:40:30.105256 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.105289 kubelet[3330]: W0916 04:40:30.105282 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.105755 kubelet[3330]: E0916 04:40:30.105383 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.105755 kubelet[3330]: E0916 04:40:30.105539 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.105755 kubelet[3330]: W0916 04:40:30.105555 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.105910 kubelet[3330]: E0916 04:40:30.105860 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.105910 kubelet[3330]: W0916 04:40:30.105875 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.107140 kubelet[3330]: E0916 04:40:30.106099 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.107140 kubelet[3330]: W0916 04:40:30.106124 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.107140 kubelet[3330]: E0916 04:40:30.106147 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.107140 kubelet[3330]: E0916 04:40:30.106960 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.107140 kubelet[3330]: E0916 04:40:30.107018 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.107140 kubelet[3330]: E0916 04:40:30.107022 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.107140 kubelet[3330]: W0916 04:40:30.107098 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.108992 kubelet[3330]: E0916 04:40:30.108921 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.109443 kubelet[3330]: E0916 04:40:30.109168 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.109443 kubelet[3330]: W0916 04:40:30.109191 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.109443 kubelet[3330]: E0916 04:40:30.109230 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.109720 kubelet[3330]: E0916 04:40:30.109699 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.110810 kubelet[3330]: W0916 04:40:30.110768 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.111038 kubelet[3330]: E0916 04:40:30.110998 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.111479 kubelet[3330]: E0916 04:40:30.111425 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.111776 kubelet[3330]: W0916 04:40:30.111678 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.112249 kubelet[3330]: E0916 04:40:30.112121 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.113136 kubelet[3330]: E0916 04:40:30.113098 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.113533 kubelet[3330]: W0916 04:40:30.113504 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.114072 kubelet[3330]: E0916 04:40:30.113711 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.115995 kubelet[3330]: E0916 04:40:30.115855 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.116676 kubelet[3330]: W0916 04:40:30.116440 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.117122 kubelet[3330]: E0916 04:40:30.117067 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.118313 kubelet[3330]: E0916 04:40:30.118281 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.118704 kubelet[3330]: W0916 04:40:30.118511 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.119604 kubelet[3330]: E0916 04:40:30.118989 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.120679 kubelet[3330]: E0916 04:40:30.120577 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.121566 kubelet[3330]: W0916 04:40:30.120947 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.121566 kubelet[3330]: E0916 04:40:30.121002 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.122315 kubelet[3330]: E0916 04:40:30.122191 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.122616 kubelet[3330]: W0916 04:40:30.122582 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.123046 kubelet[3330]: E0916 04:40:30.122912 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.124110 kubelet[3330]: E0916 04:40:30.123986 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.124358 kubelet[3330]: W0916 04:40:30.124332 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.124522 kubelet[3330]: E0916 04:40:30.124461 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.205680 kubelet[3330]: E0916 04:40:30.205136 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:30.205982 kubelet[3330]: W0916 04:40:30.205881 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:30.205982 kubelet[3330]: E0916 04:40:30.205929 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:30.251118 containerd[2009]: time="2025-09-16T04:40:30.251046007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8qzkj,Uid:1220a7d0-bd24-4832-b1ed-00323eb05829,Namespace:calico-system,Attempt:0,} returns sandbox id \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\""
Sep 16 04:40:30.970911 kubelet[3330]: E0916 04:40:30.970863 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5"
Sep 16 04:40:31.233011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035872989.mount: Deactivated successfully.
Sep 16 04:40:32.381805 containerd[2009]: time="2025-09-16T04:40:32.380632714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:32.381805 containerd[2009]: time="2025-09-16T04:40:32.381758614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 16 04:40:32.382584 containerd[2009]: time="2025-09-16T04:40:32.382532326Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:32.385801 containerd[2009]: time="2025-09-16T04:40:32.385738318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:32.387483 containerd[2009]: time="2025-09-16T04:40:32.387432526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.534129137s"
Sep 16 04:40:32.387659 containerd[2009]: time="2025-09-16T04:40:32.387612358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 16 04:40:32.390162 containerd[2009]: time="2025-09-16T04:40:32.390094210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 16 04:40:32.420436 containerd[2009]: time="2025-09-16T04:40:32.420389818Z" level=info msg="CreateContainer within sandbox \"21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 16 04:40:32.433065 containerd[2009]: time="2025-09-16T04:40:32.433018402Z" level=info msg="Container 2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:40:32.442692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283929360.mount: Deactivated successfully.
Sep 16 04:40:32.451036 containerd[2009]: time="2025-09-16T04:40:32.450842614Z" level=info msg="CreateContainer within sandbox \"21dd8c896d83d30b64b042fbf12d21ced8edfabb54f6c210c239ff55e7a99ec8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a\""
Sep 16 04:40:32.453716 containerd[2009]: time="2025-09-16T04:40:32.453626590Z" level=info msg="StartContainer for \"2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a\""
Sep 16 04:40:32.456112 containerd[2009]: time="2025-09-16T04:40:32.456040594Z" level=info msg="connecting to shim 2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a" address="unix:///run/containerd/s/70beef456947a7b0adce3e49dc22cb780a1df1518091bda586374de1ea3a9e6e" protocol=ttrpc version=3
Sep 16 04:40:32.494376 systemd[1]: Started cri-containerd-2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a.scope - libcontainer container 2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a.
Sep 16 04:40:32.579061 containerd[2009]: time="2025-09-16T04:40:32.578983715Z" level=info msg="StartContainer for \"2d72a04a701d4e24872ee3a5fd9570664ed0ed25ca171f42cefb0f6992e27f4a\" returns successfully"
Sep 16 04:40:32.973724 kubelet[3330]: E0916 04:40:32.973326 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5"
Sep 16 04:40:33.299392 kubelet[3330]: E0916 04:40:33.299246 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.299392 kubelet[3330]: W0916 04:40:33.299287 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.299392 kubelet[3330]: E0916 04:40:33.299322 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.301759 kubelet[3330]: E0916 04:40:33.301704 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.301899 kubelet[3330]: W0916 04:40:33.301747 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.301899 kubelet[3330]: E0916 04:40:33.301862 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.302997 kubelet[3330]: E0916 04:40:33.302936 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.302997 kubelet[3330]: W0916 04:40:33.302989 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.303355 kubelet[3330]: E0916 04:40:33.303025 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.303958 kubelet[3330]: E0916 04:40:33.303914 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.303958 kubelet[3330]: W0916 04:40:33.303951 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.304272 kubelet[3330]: E0916 04:40:33.303982 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.304978 kubelet[3330]: E0916 04:40:33.304932 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.304978 kubelet[3330]: W0916 04:40:33.304967 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.305147 kubelet[3330]: E0916 04:40:33.305000 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.307049 kubelet[3330]: E0916 04:40:33.306248 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.307049 kubelet[3330]: W0916 04:40:33.306292 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.307049 kubelet[3330]: E0916 04:40:33.306457 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.308914 kubelet[3330]: E0916 04:40:33.308864 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.308914 kubelet[3330]: W0916 04:40:33.308903 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.309158 kubelet[3330]: E0916 04:40:33.308936 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.311027 kubelet[3330]: E0916 04:40:33.310975 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.311027 kubelet[3330]: W0916 04:40:33.311015 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.311423 kubelet[3330]: E0916 04:40:33.311048 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.312948 kubelet[3330]: E0916 04:40:33.312882 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.312948 kubelet[3330]: W0916 04:40:33.312936 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.313160 kubelet[3330]: E0916 04:40:33.312969 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.313415 kubelet[3330]: E0916 04:40:33.313379 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.313415 kubelet[3330]: W0916 04:40:33.313408 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.313558 kubelet[3330]: E0916 04:40:33.313433 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.313904 kubelet[3330]: E0916 04:40:33.313867 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.313975 kubelet[3330]: W0916 04:40:33.313897 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.313975 kubelet[3330]: E0916 04:40:33.313939 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.314989 kubelet[3330]: E0916 04:40:33.314931 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.314989 kubelet[3330]: W0916 04:40:33.314983 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.315239 kubelet[3330]: E0916 04:40:33.315016 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.315557 kubelet[3330]: E0916 04:40:33.315519 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.315557 kubelet[3330]: W0916 04:40:33.315551 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.315794 kubelet[3330]: E0916 04:40:33.315576 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.317601 kubelet[3330]: E0916 04:40:33.317552 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.317601 kubelet[3330]: W0916 04:40:33.317591 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.317860 kubelet[3330]: E0916 04:40:33.317624 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.318344 kubelet[3330]: E0916 04:40:33.318304 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.318344 kubelet[3330]: W0916 04:40:33.318337 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.318689 kubelet[3330]: E0916 04:40:33.318366 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.335140 kubelet[3330]: E0916 04:40:33.334793 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.335140 kubelet[3330]: W0916 04:40:33.334832 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.335140 kubelet[3330]: E0916 04:40:33.334869 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.335755 kubelet[3330]: E0916 04:40:33.335728 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.335869 kubelet[3330]: W0916 04:40:33.335846 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.336007 kubelet[3330]: E0916 04:40:33.335983 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.336825 kubelet[3330]: E0916 04:40:33.336774 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.336825 kubelet[3330]: W0916 04:40:33.336817 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.337109 kubelet[3330]: E0916 04:40:33.336863 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.337943 kubelet[3330]: E0916 04:40:33.337897 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.337943 kubelet[3330]: W0916 04:40:33.337934 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.338258 kubelet[3330]: E0916 04:40:33.337982 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:40:33.339883 kubelet[3330]: E0916 04:40:33.339832 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:40:33.339883 kubelet[3330]: W0916 04:40:33.339878 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:40:33.340214 kubelet[3330]: E0916 04:40:33.340048 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.340946 kubelet[3330]: E0916 04:40:33.340902 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.340946 kubelet[3330]: W0916 04:40:33.340938 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.341262 kubelet[3330]: E0916 04:40:33.341073 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.342981 kubelet[3330]: E0916 04:40:33.342916 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.342981 kubelet[3330]: W0916 04:40:33.342969 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.343315 kubelet[3330]: E0916 04:40:33.343144 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.343683 kubelet[3330]: E0916 04:40:33.343610 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.343683 kubelet[3330]: W0916 04:40:33.343671 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.343977 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.344722 kubelet[3330]: W0916 04:40:33.343995 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344231 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344258 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.344722 kubelet[3330]: W0916 04:40:33.344274 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344294 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344507 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344521 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.344722 kubelet[3330]: W0916 04:40:33.344535 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.344722 kubelet[3330]: E0916 04:40:33.344586 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.345217 kubelet[3330]: E0916 04:40:33.344929 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.345217 kubelet[3330]: W0916 04:40:33.344946 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.345217 kubelet[3330]: E0916 04:40:33.344966 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.346579 kubelet[3330]: E0916 04:40:33.346527 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.346579 kubelet[3330]: W0916 04:40:33.346566 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.346980 kubelet[3330]: E0916 04:40:33.346695 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.348033 kubelet[3330]: E0916 04:40:33.347983 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.348033 kubelet[3330]: W0916 04:40:33.348021 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.348399 kubelet[3330]: E0916 04:40:33.348171 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.348399 kubelet[3330]: E0916 04:40:33.348356 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.348399 kubelet[3330]: W0916 04:40:33.348373 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.348399 kubelet[3330]: E0916 04:40:33.348393 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.348705 kubelet[3330]: E0916 04:40:33.348668 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.348705 kubelet[3330]: W0916 04:40:33.348684 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.348705 kubelet[3330]: E0916 04:40:33.348702 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.349851 kubelet[3330]: E0916 04:40:33.349804 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.349851 kubelet[3330]: W0916 04:40:33.349841 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.350146 kubelet[3330]: E0916 04:40:33.349873 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:40:33.350972 kubelet[3330]: E0916 04:40:33.350909 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:40:33.350972 kubelet[3330]: W0916 04:40:33.350947 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:40:33.351155 kubelet[3330]: E0916 04:40:33.350996 3330 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:40:33.714691 containerd[2009]: time="2025-09-16T04:40:33.714593688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:33.719690 containerd[2009]: time="2025-09-16T04:40:33.719027772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 16 04:40:33.720805 containerd[2009]: time="2025-09-16T04:40:33.720620856Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:33.727563 containerd[2009]: time="2025-09-16T04:40:33.727364088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:33.729846 containerd[2009]: time="2025-09-16T04:40:33.729758400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.339598862s" Sep 16 04:40:33.729934 containerd[2009]: time="2025-09-16T04:40:33.729848244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 16 04:40:33.741153 containerd[2009]: time="2025-09-16T04:40:33.741073080Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:40:33.761035 containerd[2009]: time="2025-09-16T04:40:33.759728329Z" level=info msg="Container 101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:33.777007 containerd[2009]: time="2025-09-16T04:40:33.776934121Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\"" Sep 16 04:40:33.778692 containerd[2009]: time="2025-09-16T04:40:33.778611133Z" level=info msg="StartContainer for \"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\"" Sep 16 04:40:33.782377 containerd[2009]: time="2025-09-16T04:40:33.782232133Z" level=info msg="connecting to shim 101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8" address="unix:///run/containerd/s/c68ace1eefb7b43cfd0a06e29feae8cc67fd104ca4a0391cb38114cd38dbf72f" protocol=ttrpc version=3 Sep 16 04:40:33.849433 systemd[1]: Started cri-containerd-101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8.scope - libcontainer container 101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8. Sep 16 04:40:33.976729 containerd[2009]: time="2025-09-16T04:40:33.975678218Z" level=info msg="StartContainer for \"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\" returns successfully" Sep 16 04:40:34.008239 systemd[1]: cri-containerd-101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8.scope: Deactivated successfully. 
Sep 16 04:40:34.018863 containerd[2009]: time="2025-09-16T04:40:34.018801850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\" id:\"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\" pid:4189 exited_at:{seconds:1757997634 nanos:18024310}" Sep 16 04:40:34.019198 containerd[2009]: time="2025-09-16T04:40:34.019030942Z" level=info msg="received exit event container_id:\"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\" id:\"101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8\" pid:4189 exited_at:{seconds:1757997634 nanos:18024310}" Sep 16 04:40:34.062547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-101a619c5131a2f0cf28574c3b93b19c2959892718424ceee31a19a8373df2d8-rootfs.mount: Deactivated successfully. Sep 16 04:40:34.274875 kubelet[3330]: I0916 04:40:34.273935 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d56496955-gldlk" podStartSLOduration=2.737394682 podStartE2EDuration="5.273909299s" podCreationTimestamp="2025-09-16 04:40:29 +0000 UTC" firstStartedPulling="2025-09-16 04:40:29.852423105 +0000 UTC m=+32.126812864" lastFinishedPulling="2025-09-16 04:40:32.388937626 +0000 UTC m=+34.663327481" observedRunningTime="2025-09-16 04:40:33.393505871 +0000 UTC m=+35.667895678" watchObservedRunningTime="2025-09-16 04:40:34.273909299 +0000 UTC m=+36.548299070" Sep 16 04:40:34.973436 kubelet[3330]: E0916 04:40:34.971870 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5" Sep 16 04:40:35.251260 containerd[2009]: time="2025-09-16T04:40:35.250914624Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:40:36.971294 kubelet[3330]: E0916 04:40:36.971215 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5" Sep 16 04:40:38.971113 kubelet[3330]: E0916 04:40:38.971058 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp4p4" podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5" Sep 16 04:40:39.184981 containerd[2009]: time="2025-09-16T04:40:39.184916727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:39.186415 containerd[2009]: time="2025-09-16T04:40:39.186202539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 16 04:40:39.187402 containerd[2009]: time="2025-09-16T04:40:39.187347196Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:39.190872 containerd[2009]: time="2025-09-16T04:40:39.190820944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:39.192722 containerd[2009]: time="2025-09-16T04:40:39.192203884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.941211392s" Sep 16 04:40:39.192722 containerd[2009]: time="2025-09-16T04:40:39.192259072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 16 04:40:39.196720 containerd[2009]: time="2025-09-16T04:40:39.196576432Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:40:39.209678 containerd[2009]: time="2025-09-16T04:40:39.208958476Z" level=info msg="Container 5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:39.233146 containerd[2009]: time="2025-09-16T04:40:39.232967056Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\"" Sep 16 04:40:39.236680 containerd[2009]: time="2025-09-16T04:40:39.235838500Z" level=info msg="StartContainer for \"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\"" Sep 16 04:40:39.239400 containerd[2009]: time="2025-09-16T04:40:39.239297140Z" level=info msg="connecting to shim 5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55" address="unix:///run/containerd/s/c68ace1eefb7b43cfd0a06e29feae8cc67fd104ca4a0391cb38114cd38dbf72f" protocol=ttrpc version=3 Sep 16 04:40:39.288964 systemd[1]: Started cri-containerd-5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55.scope - libcontainer container 5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55. 
Sep 16 04:40:39.387051 containerd[2009]: time="2025-09-16T04:40:39.386908480Z" level=info msg="StartContainer for \"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\" returns successfully" Sep 16 04:40:40.389519 containerd[2009]: time="2025-09-16T04:40:40.389426897Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:40:40.395163 systemd[1]: cri-containerd-5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55.scope: Deactivated successfully. Sep 16 04:40:40.397759 systemd[1]: cri-containerd-5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55.scope: Consumed 941ms CPU time, 185.1M memory peak, 165.8M written to disk. Sep 16 04:40:40.398471 containerd[2009]: time="2025-09-16T04:40:40.398300346Z" level=info msg="received exit event container_id:\"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\" id:\"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\" pid:4253 exited_at:{seconds:1757997640 nanos:397594110}" Sep 16 04:40:40.400142 containerd[2009]: time="2025-09-16T04:40:40.398613678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\" id:\"5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55\" pid:4253 exited_at:{seconds:1757997640 nanos:397594110}" Sep 16 04:40:40.439074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cda16ce6582486dabd7ee73c16bda775ab99c70e18e752800c96ee0aead4f55-rootfs.mount: Deactivated successfully. 
Sep 16 04:40:40.446680 kubelet[3330]: I0916 04:40:40.446605 3330 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:40:40.531335 systemd[1]: Created slice kubepods-burstable-pod58ba772f_28f4_4f9c_bcd4_9c91f3a85256.slice - libcontainer container kubepods-burstable-pod58ba772f_28f4_4f9c_bcd4_9c91f3a85256.slice. Sep 16 04:40:40.566271 systemd[1]: Created slice kubepods-besteffort-podea374977_c50e_40f3_900e_1191409caa19.slice - libcontainer container kubepods-besteffort-podea374977_c50e_40f3_900e_1191409caa19.slice. Sep 16 04:40:40.587456 systemd[1]: Created slice kubepods-besteffort-pod9764fd8b_561c_41e6_b1ba_67f352cd8f06.slice - libcontainer container kubepods-besteffort-pod9764fd8b_561c_41e6_b1ba_67f352cd8f06.slice. Sep 16 04:40:40.594672 kubelet[3330]: I0916 04:40:40.594332 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58ba772f-28f4-4f9c-bcd4-9c91f3a85256-config-volume\") pod \"coredns-668d6bf9bc-qvfrd\" (UID: \"58ba772f-28f4-4f9c-bcd4-9c91f3a85256\") " pod="kube-system/coredns-668d6bf9bc-qvfrd" Sep 16 04:40:40.595832 kubelet[3330]: I0916 04:40:40.595746 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9764fd8b-561c-41e6-b1ba-67f352cd8f06-calico-apiserver-certs\") pod \"calico-apiserver-58c96c794b-2psjs\" (UID: \"9764fd8b-561c-41e6-b1ba-67f352cd8f06\") " pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" Sep 16 04:40:40.595973 kubelet[3330]: I0916 04:40:40.595885 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea374977-c50e-40f3-900e-1191409caa19-calico-apiserver-certs\") pod \"calico-apiserver-58c96c794b-l57qh\" (UID: \"ea374977-c50e-40f3-900e-1191409caa19\") " 
pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" Sep 16 04:40:40.595973 kubelet[3330]: I0916 04:40:40.595936 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2m2g\" (UniqueName: \"kubernetes.io/projected/58ba772f-28f4-4f9c-bcd4-9c91f3a85256-kube-api-access-k2m2g\") pod \"coredns-668d6bf9bc-qvfrd\" (UID: \"58ba772f-28f4-4f9c-bcd4-9c91f3a85256\") " pod="kube-system/coredns-668d6bf9bc-qvfrd" Sep 16 04:40:40.596095 kubelet[3330]: I0916 04:40:40.596023 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fp5\" (UniqueName: \"kubernetes.io/projected/9764fd8b-561c-41e6-b1ba-67f352cd8f06-kube-api-access-t8fp5\") pod \"calico-apiserver-58c96c794b-2psjs\" (UID: \"9764fd8b-561c-41e6-b1ba-67f352cd8f06\") " pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" Sep 16 04:40:40.596154 kubelet[3330]: I0916 04:40:40.596092 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f48t\" (UniqueName: \"kubernetes.io/projected/ea374977-c50e-40f3-900e-1191409caa19-kube-api-access-8f48t\") pod \"calico-apiserver-58c96c794b-l57qh\" (UID: \"ea374977-c50e-40f3-900e-1191409caa19\") " pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" Sep 16 04:40:40.596210 kubelet[3330]: I0916 04:40:40.596156 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c27947-78fe-4af7-b5ae-3ade9032d31b-config-volume\") pod \"coredns-668d6bf9bc-6j7l5\" (UID: \"20c27947-78fe-4af7-b5ae-3ade9032d31b\") " pod="kube-system/coredns-668d6bf9bc-6j7l5" Sep 16 04:40:40.596266 kubelet[3330]: I0916 04:40:40.596228 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj565\" (UniqueName: 
\"kubernetes.io/projected/20c27947-78fe-4af7-b5ae-3ade9032d31b-kube-api-access-sj565\") pod \"coredns-668d6bf9bc-6j7l5\" (UID: \"20c27947-78fe-4af7-b5ae-3ade9032d31b\") " pod="kube-system/coredns-668d6bf9bc-6j7l5" Sep 16 04:40:40.608376 systemd[1]: Created slice kubepods-burstable-pod20c27947_78fe_4af7_b5ae_3ade9032d31b.slice - libcontainer container kubepods-burstable-pod20c27947_78fe_4af7_b5ae_3ade9032d31b.slice. Sep 16 04:40:40.645536 systemd[1]: Created slice kubepods-besteffort-poda2a9dd72_8738_47f4_85d6_1505ef8e60dc.slice - libcontainer container kubepods-besteffort-poda2a9dd72_8738_47f4_85d6_1505ef8e60dc.slice. Sep 16 04:40:40.659225 systemd[1]: Created slice kubepods-besteffort-pod60fe9dd5_b719_4d62_b266_718ad94cdd4b.slice - libcontainer container kubepods-besteffort-pod60fe9dd5_b719_4d62_b266_718ad94cdd4b.slice. Sep 16 04:40:40.682574 systemd[1]: Created slice kubepods-besteffort-pod7330b840_14e1_402c_85da_147dbe5c0a4a.slice - libcontainer container kubepods-besteffort-pod7330b840_14e1_402c_85da_147dbe5c0a4a.slice. 
Sep 16 04:40:40.697746 kubelet[3330]: I0916 04:40:40.697690 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tqx\" (UniqueName: \"kubernetes.io/projected/60fe9dd5-b719-4d62-b266-718ad94cdd4b-kube-api-access-v9tqx\") pod \"whisker-7c666ccbf4-hr8hl\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " pod="calico-system/whisker-7c666ccbf4-hr8hl" Sep 16 04:40:40.698563 kubelet[3330]: I0916 04:40:40.698494 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-ca-bundle\") pod \"whisker-7c666ccbf4-hr8hl\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " pod="calico-system/whisker-7c666ccbf4-hr8hl" Sep 16 04:40:40.698757 kubelet[3330]: I0916 04:40:40.698617 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlgj\" (UniqueName: \"kubernetes.io/projected/7330b840-14e1-402c-85da-147dbe5c0a4a-kube-api-access-qmlgj\") pod \"goldmane-54d579b49d-9bw49\" (UID: \"7330b840-14e1-402c-85da-147dbe5c0a4a\") " pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:40.699064 kubelet[3330]: I0916 04:40:40.698810 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2gd\" (UniqueName: \"kubernetes.io/projected/a2a9dd72-8738-47f4-85d6-1505ef8e60dc-kube-api-access-5w2gd\") pod \"calico-kube-controllers-7778b7c887-7fncf\" (UID: \"a2a9dd72-8738-47f4-85d6-1505ef8e60dc\") " pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" Sep 16 04:40:40.699163 kubelet[3330]: I0916 04:40:40.699082 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7330b840-14e1-402c-85da-147dbe5c0a4a-goldmane-key-pair\") pod \"goldmane-54d579b49d-9bw49\" 
(UID: \"7330b840-14e1-402c-85da-147dbe5c0a4a\") " pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:40.699950 kubelet[3330]: I0916 04:40:40.699874 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7330b840-14e1-402c-85da-147dbe5c0a4a-config\") pod \"goldmane-54d579b49d-9bw49\" (UID: \"7330b840-14e1-402c-85da-147dbe5c0a4a\") " pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:40.702763 kubelet[3330]: I0916 04:40:40.702688 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-backend-key-pair\") pod \"whisker-7c666ccbf4-hr8hl\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " pod="calico-system/whisker-7c666ccbf4-hr8hl" Sep 16 04:40:40.702960 kubelet[3330]: I0916 04:40:40.702842 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a9dd72-8738-47f4-85d6-1505ef8e60dc-tigera-ca-bundle\") pod \"calico-kube-controllers-7778b7c887-7fncf\" (UID: \"a2a9dd72-8738-47f4-85d6-1505ef8e60dc\") " pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" Sep 16 04:40:40.702960 kubelet[3330]: I0916 04:40:40.702932 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7330b840-14e1-402c-85da-147dbe5c0a4a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-9bw49\" (UID: \"7330b840-14e1-402c-85da-147dbe5c0a4a\") " pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:40.851349 containerd[2009]: time="2025-09-16T04:40:40.851297576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvfrd,Uid:58ba772f-28f4-4f9c-bcd4-9c91f3a85256,Namespace:kube-system,Attempt:0,}" Sep 16 
04:40:40.881996 containerd[2009]: time="2025-09-16T04:40:40.881939888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-l57qh,Uid:ea374977-c50e-40f3-900e-1191409caa19,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:40:40.905351 containerd[2009]: time="2025-09-16T04:40:40.905211368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-2psjs,Uid:9764fd8b-561c-41e6-b1ba-67f352cd8f06,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:40:40.929164 containerd[2009]: time="2025-09-16T04:40:40.928916924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j7l5,Uid:20c27947-78fe-4af7-b5ae-3ade9032d31b,Namespace:kube-system,Attempt:0,}" Sep 16 04:40:40.954786 containerd[2009]: time="2025-09-16T04:40:40.954739124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b7c887-7fncf,Uid:a2a9dd72-8738-47f4-85d6-1505ef8e60dc,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:40.975858 containerd[2009]: time="2025-09-16T04:40:40.974268248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c666ccbf4-hr8hl,Uid:60fe9dd5-b719-4d62-b266-718ad94cdd4b,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:40.985013 systemd[1]: Created slice kubepods-besteffort-pod838fa562_42c6_4d1d_a9fb_07d8f39bc7c5.slice - libcontainer container kubepods-besteffort-pod838fa562_42c6_4d1d_a9fb_07d8f39bc7c5.slice. 
Sep 16 04:40:40.991133 containerd[2009]: time="2025-09-16T04:40:40.991038332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9bw49,Uid:7330b840-14e1-402c-85da-147dbe5c0a4a,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:40.993034 containerd[2009]: time="2025-09-16T04:40:40.992838824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp4p4,Uid:838fa562-42c6-4d1d-a9fb-07d8f39bc7c5,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:41.317563 containerd[2009]: time="2025-09-16T04:40:41.316725726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:40:41.494068 containerd[2009]: time="2025-09-16T04:40:41.493814695Z" level=error msg="Failed to destroy network for sandbox \"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.500902 systemd[1]: run-netns-cni\x2d0124fda6\x2dd8dd\x2d3a22\x2d1d7f\x2d243998f3aa03.mount: Deactivated successfully. 
Sep 16 04:40:41.509336 containerd[2009]: time="2025-09-16T04:40:41.509244271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b7c887-7fncf,Uid:a2a9dd72-8738-47f4-85d6-1505ef8e60dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.510610 kubelet[3330]: E0916 04:40:41.510524 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.512319 kubelet[3330]: E0916 04:40:41.511823 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" Sep 16 04:40:41.512319 kubelet[3330]: E0916 04:40:41.511901 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" Sep 16 04:40:41.512319 kubelet[3330]: E0916 04:40:41.512030 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7778b7c887-7fncf_calico-system(a2a9dd72-8738-47f4-85d6-1505ef8e60dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7778b7c887-7fncf_calico-system(a2a9dd72-8738-47f4-85d6-1505ef8e60dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"865e6b26d8a627d2ca3efb5e8f5fb3c59463f93b405353108954c7592a2706af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" podUID="a2a9dd72-8738-47f4-85d6-1505ef8e60dc" Sep 16 04:40:41.528206 containerd[2009]: time="2025-09-16T04:40:41.528146791Z" level=error msg="Failed to destroy network for sandbox \"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.528826 containerd[2009]: time="2025-09-16T04:40:41.528518767Z" level=error msg="Failed to destroy network for sandbox \"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.534545 systemd[1]: run-netns-cni\x2d6b08db9d\x2dad6d\x2d7f88\x2d24f9\x2d5a61523a87ed.mount: Deactivated successfully. Sep 16 04:40:41.536006 systemd[1]: run-netns-cni\x2dfdd7743e\x2de77b\x2d3034\x2db84b\x2df76eaf7b2d6d.mount: Deactivated successfully. 
Sep 16 04:40:41.537850 containerd[2009]: time="2025-09-16T04:40:41.536743699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-l57qh,Uid:ea374977-c50e-40f3-900e-1191409caa19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.546225 kubelet[3330]: E0916 04:40:41.541837 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.546225 kubelet[3330]: E0916 04:40:41.541919 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" Sep 16 04:40:41.546225 kubelet[3330]: E0916 04:40:41.541951 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" Sep 16 04:40:41.546597 containerd[2009]: time="2025-09-16T04:40:41.544789387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-2psjs,Uid:9764fd8b-561c-41e6-b1ba-67f352cd8f06,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.546750 kubelet[3330]: E0916 04:40:41.542016 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c96c794b-l57qh_calico-apiserver(ea374977-c50e-40f3-900e-1191409caa19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c96c794b-l57qh_calico-apiserver(ea374977-c50e-40f3-900e-1191409caa19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"835b5c6d9c5f1dff112a2a031e88887e4797b7dc8aa964f0d0be7678c025ab0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" podUID="ea374977-c50e-40f3-900e-1191409caa19" Sep 16 04:40:41.548201 kubelet[3330]: E0916 04:40:41.547404 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.548201 kubelet[3330]: E0916 04:40:41.547535 3330 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" Sep 16 04:40:41.548201 kubelet[3330]: E0916 04:40:41.547743 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" Sep 16 04:40:41.548439 kubelet[3330]: E0916 04:40:41.547986 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c96c794b-2psjs_calico-apiserver(9764fd8b-561c-41e6-b1ba-67f352cd8f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c96c794b-2psjs_calico-apiserver(9764fd8b-561c-41e6-b1ba-67f352cd8f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b501e63bc92d2ca668bea590e195a759c5cfbdc4877439c730ca0fe5b0f709ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" podUID="9764fd8b-561c-41e6-b1ba-67f352cd8f06" Sep 16 04:40:41.570772 containerd[2009]: time="2025-09-16T04:40:41.570414703Z" level=error msg="Failed to destroy network for sandbox \"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.576553 systemd[1]: run-netns-cni\x2dfe59caa5\x2d117f\x2d0aac\x2dad57\x2d3d74fa7aa61e.mount: Deactivated successfully. Sep 16 04:40:41.582026 containerd[2009]: time="2025-09-16T04:40:41.581264263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvfrd,Uid:58ba772f-28f4-4f9c-bcd4-9c91f3a85256,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.582225 kubelet[3330]: E0916 04:40:41.581676 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.582225 kubelet[3330]: E0916 04:40:41.581772 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qvfrd" Sep 16 04:40:41.582225 kubelet[3330]: E0916 04:40:41.581807 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qvfrd" Sep 16 04:40:41.583655 kubelet[3330]: E0916 04:40:41.581942 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qvfrd_kube-system(58ba772f-28f4-4f9c-bcd4-9c91f3a85256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qvfrd_kube-system(58ba772f-28f4-4f9c-bcd4-9c91f3a85256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b29cfd2ad548756fc1c732348895d7a7047bc41ce46b6b5654179d99d70c8224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qvfrd" podUID="58ba772f-28f4-4f9c-bcd4-9c91f3a85256" Sep 16 04:40:41.587879 containerd[2009]: time="2025-09-16T04:40:41.585534979Z" level=error msg="Failed to destroy network for sandbox \"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.593903 containerd[2009]: time="2025-09-16T04:40:41.593822923Z" level=error msg="Failed to destroy network for sandbox \"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.595503 containerd[2009]: time="2025-09-16T04:40:41.594923455Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j7l5,Uid:20c27947-78fe-4af7-b5ae-3ade9032d31b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.595995 kubelet[3330]: E0916 04:40:41.595853 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.595995 kubelet[3330]: E0916 04:40:41.595929 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6j7l5" Sep 16 04:40:41.595995 kubelet[3330]: E0916 04:40:41.595961 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6j7l5" Sep 16 04:40:41.596184 kubelet[3330]: E0916 04:40:41.596021 3330 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6j7l5_kube-system(20c27947-78fe-4af7-b5ae-3ade9032d31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6j7l5_kube-system(20c27947-78fe-4af7-b5ae-3ade9032d31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f7945c2f39cc20e774b673fdce8a85b801fba7df5d64a883cda75882dc7d9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6j7l5" podUID="20c27947-78fe-4af7-b5ae-3ade9032d31b" Sep 16 04:40:41.599095 containerd[2009]: time="2025-09-16T04:40:41.596927611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9bw49,Uid:7330b840-14e1-402c-85da-147dbe5c0a4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.599312 kubelet[3330]: E0916 04:40:41.598894 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.599312 kubelet[3330]: E0916 04:40:41.598982 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:41.599312 kubelet[3330]: E0916 04:40:41.599037 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9bw49" Sep 16 04:40:41.600897 kubelet[3330]: E0916 04:40:41.600788 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-9bw49_calico-system(7330b840-14e1-402c-85da-147dbe5c0a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-9bw49_calico-system(7330b840-14e1-402c-85da-147dbe5c0a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a19c9ade842c106e584c7229bcd9b559bae42577b093ff80b92e97e96b2b296b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-9bw49" podUID="7330b840-14e1-402c-85da-147dbe5c0a4a" Sep 16 04:40:41.609254 containerd[2009]: time="2025-09-16T04:40:41.608979788Z" level=error msg="Failed to destroy network for sandbox \"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 
04:40:41.612266 containerd[2009]: time="2025-09-16T04:40:41.612166064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c666ccbf4-hr8hl,Uid:60fe9dd5-b719-4d62-b266-718ad94cdd4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.613455 kubelet[3330]: E0916 04:40:41.613365 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.613589 kubelet[3330]: E0916 04:40:41.613449 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c666ccbf4-hr8hl" Sep 16 04:40:41.613589 kubelet[3330]: E0916 04:40:41.613498 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c666ccbf4-hr8hl" 
Sep 16 04:40:41.613884 kubelet[3330]: E0916 04:40:41.613575 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c666ccbf4-hr8hl_calico-system(60fe9dd5-b719-4d62-b266-718ad94cdd4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c666ccbf4-hr8hl_calico-system(60fe9dd5-b719-4d62-b266-718ad94cdd4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c371582dbe5bf77021a6d4f74e6d2b6779e44763660c27434db12b232754211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c666ccbf4-hr8hl" podUID="60fe9dd5-b719-4d62-b266-718ad94cdd4b" Sep 16 04:40:41.614385 containerd[2009]: time="2025-09-16T04:40:41.614316380Z" level=error msg="Failed to destroy network for sandbox \"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.617587 containerd[2009]: time="2025-09-16T04:40:41.617503916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp4p4,Uid:838fa562-42c6-4d1d-a9fb-07d8f39bc7c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.618326 kubelet[3330]: E0916 04:40:41.618202 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:40:41.618807 kubelet[3330]: E0916 04:40:41.618558 3330 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:41.618807 kubelet[3330]: E0916 04:40:41.618678 3330 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cp4p4" Sep 16 04:40:41.619120 kubelet[3330]: E0916 04:40:41.618774 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cp4p4_calico-system(838fa562-42c6-4d1d-a9fb-07d8f39bc7c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cp4p4_calico-system(838fa562-42c6-4d1d-a9fb-07d8f39bc7c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db4bc93bd37bc52a8c0fba1c962e267d3eda2e1eba57a92bdeb2b5a0c489be5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cp4p4" 
podUID="838fa562-42c6-4d1d-a9fb-07d8f39bc7c5" Sep 16 04:40:42.438699 systemd[1]: run-netns-cni\x2d685f1fde\x2d95e6\x2d755c\x2db03e\x2d664515d34aea.mount: Deactivated successfully. Sep 16 04:40:42.438853 systemd[1]: run-netns-cni\x2df8820541\x2d4689\x2de28e\x2d76ea\x2df6abad189b17.mount: Deactivated successfully. Sep 16 04:40:42.439002 systemd[1]: run-netns-cni\x2d5182fc02\x2d0a0b\x2d8904\x2dbe3d\x2df7afdaef82a1.mount: Deactivated successfully. Sep 16 04:40:42.439126 systemd[1]: run-netns-cni\x2d7ae38c16\x2dc5d6\x2d7f28\x2d8c65\x2dc1e2d7e5da63.mount: Deactivated successfully. Sep 16 04:40:49.198155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524979207.mount: Deactivated successfully. Sep 16 04:40:49.245524 containerd[2009]: time="2025-09-16T04:40:49.245361913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:49.247363 containerd[2009]: time="2025-09-16T04:40:49.247309777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 16 04:40:49.247788 containerd[2009]: time="2025-09-16T04:40:49.247678273Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:49.250695 containerd[2009]: time="2025-09-16T04:40:49.250547545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:49.252109 containerd[2009]: time="2025-09-16T04:40:49.251887141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.934066967s" Sep 16 04:40:49.252109 containerd[2009]: time="2025-09-16T04:40:49.251945101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 16 04:40:49.280705 containerd[2009]: time="2025-09-16T04:40:49.280540586Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:40:49.304676 containerd[2009]: time="2025-09-16T04:40:49.304272218Z" level=info msg="Container 6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:49.314794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805252768.mount: Deactivated successfully. Sep 16 04:40:49.331511 containerd[2009]: time="2025-09-16T04:40:49.331414898Z" level=info msg="CreateContainer within sandbox \"91dc171900f8030a80dcd0163ed967e76d1dbfe073249592569d53dbb87650b7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\"" Sep 16 04:40:49.335840 containerd[2009]: time="2025-09-16T04:40:49.335475038Z" level=info msg="StartContainer for \"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\"" Sep 16 04:40:49.342582 containerd[2009]: time="2025-09-16T04:40:49.342501542Z" level=info msg="connecting to shim 6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c" address="unix:///run/containerd/s/c68ace1eefb7b43cfd0a06e29feae8cc67fd104ca4a0391cb38114cd38dbf72f" protocol=ttrpc version=3 Sep 16 04:40:49.390968 systemd[1]: Started cri-containerd-6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c.scope - libcontainer container 
6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c. Sep 16 04:40:49.484016 containerd[2009]: time="2025-09-16T04:40:49.483868683Z" level=info msg="StartContainer for \"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" returns successfully" Sep 16 04:40:49.735387 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:40:49.735549 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 16 04:40:50.095162 kubelet[3330]: I0916 04:40:50.095101 3330 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-ca-bundle\") pod \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " Sep 16 04:40:50.097297 kubelet[3330]: I0916 04:40:50.095179 3330 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tqx\" (UniqueName: \"kubernetes.io/projected/60fe9dd5-b719-4d62-b266-718ad94cdd4b-kube-api-access-v9tqx\") pod \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " Sep 16 04:40:50.097297 kubelet[3330]: I0916 04:40:50.095261 3330 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-backend-key-pair\") pod \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\" (UID: \"60fe9dd5-b719-4d62-b266-718ad94cdd4b\") " Sep 16 04:40:50.104676 kubelet[3330]: I0916 04:40:50.103724 3330 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "60fe9dd5-b719-4d62-b266-718ad94cdd4b" (UID: "60fe9dd5-b719-4d62-b266-718ad94cdd4b"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:40:50.113523 kubelet[3330]: I0916 04:40:50.113431 3330 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fe9dd5-b719-4d62-b266-718ad94cdd4b-kube-api-access-v9tqx" (OuterVolumeSpecName: "kube-api-access-v9tqx") pod "60fe9dd5-b719-4d62-b266-718ad94cdd4b" (UID: "60fe9dd5-b719-4d62-b266-718ad94cdd4b"). InnerVolumeSpecName "kube-api-access-v9tqx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:40:50.113976 kubelet[3330]: I0916 04:40:50.113911 3330 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "60fe9dd5-b719-4d62-b266-718ad94cdd4b" (UID: "60fe9dd5-b719-4d62-b266-718ad94cdd4b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:40:50.197617 kubelet[3330]: I0916 04:40:50.196263 3330 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-backend-key-pair\") on node \"ip-172-31-31-59\" DevicePath \"\"" Sep 16 04:40:50.197617 kubelet[3330]: I0916 04:40:50.196313 3330 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60fe9dd5-b719-4d62-b266-718ad94cdd4b-whisker-ca-bundle\") on node \"ip-172-31-31-59\" DevicePath \"\"" Sep 16 04:40:50.197617 kubelet[3330]: I0916 04:40:50.196336 3330 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9tqx\" (UniqueName: \"kubernetes.io/projected/60fe9dd5-b719-4d62-b266-718ad94cdd4b-kube-api-access-v9tqx\") on node \"ip-172-31-31-59\" DevicePath \"\"" Sep 16 04:40:50.201879 systemd[1]: 
var-lib-kubelet-pods-60fe9dd5\x2db719\x2d4d62\x2db266\x2d718ad94cdd4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv9tqx.mount: Deactivated successfully. Sep 16 04:40:50.202195 systemd[1]: var-lib-kubelet-pods-60fe9dd5\x2db719\x2d4d62\x2db266\x2d718ad94cdd4b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:40:50.390013 systemd[1]: Removed slice kubepods-besteffort-pod60fe9dd5_b719_4d62_b266_718ad94cdd4b.slice - libcontainer container kubepods-besteffort-pod60fe9dd5_b719_4d62_b266_718ad94cdd4b.slice. Sep 16 04:40:50.438411 kubelet[3330]: I0916 04:40:50.437729 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8qzkj" podStartSLOduration=2.438516572 podStartE2EDuration="21.437700639s" podCreationTimestamp="2025-09-16 04:40:29 +0000 UTC" firstStartedPulling="2025-09-16 04:40:30.254482711 +0000 UTC m=+32.528872482" lastFinishedPulling="2025-09-16 04:40:49.253666778 +0000 UTC m=+51.528056549" observedRunningTime="2025-09-16 04:40:50.436991379 +0000 UTC m=+52.711381246" watchObservedRunningTime="2025-09-16 04:40:50.437700639 +0000 UTC m=+52.712090494" Sep 16 04:40:50.551485 systemd[1]: Created slice kubepods-besteffort-podc065da56_bba4_445e_9f5d_e716a1404359.slice - libcontainer container kubepods-besteffort-podc065da56_bba4_445e_9f5d_e716a1404359.slice. 
Sep 16 04:40:50.600869 kubelet[3330]: I0916 04:40:50.600795 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c065da56-bba4-445e-9f5d-e716a1404359-whisker-ca-bundle\") pod \"whisker-ccf47dddf-hbcpq\" (UID: \"c065da56-bba4-445e-9f5d-e716a1404359\") " pod="calico-system/whisker-ccf47dddf-hbcpq" Sep 16 04:40:50.601041 kubelet[3330]: I0916 04:40:50.600877 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpsj\" (UniqueName: \"kubernetes.io/projected/c065da56-bba4-445e-9f5d-e716a1404359-kube-api-access-rxpsj\") pod \"whisker-ccf47dddf-hbcpq\" (UID: \"c065da56-bba4-445e-9f5d-e716a1404359\") " pod="calico-system/whisker-ccf47dddf-hbcpq" Sep 16 04:40:50.601041 kubelet[3330]: I0916 04:40:50.600931 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c065da56-bba4-445e-9f5d-e716a1404359-whisker-backend-key-pair\") pod \"whisker-ccf47dddf-hbcpq\" (UID: \"c065da56-bba4-445e-9f5d-e716a1404359\") " pod="calico-system/whisker-ccf47dddf-hbcpq" Sep 16 04:40:50.675540 containerd[2009]: time="2025-09-16T04:40:50.674876153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"c56eb1fd79b826b1a1dcb8e91e185f03ee53d6ab01761ff0759ffb2ecc29660f\" pid:4587 exit_status:1 exited_at:{seconds:1757997650 nanos:674129057}" Sep 16 04:40:50.860867 containerd[2009]: time="2025-09-16T04:40:50.860796641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ccf47dddf-hbcpq,Uid:c065da56-bba4-445e-9f5d-e716a1404359,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:51.191139 (udev-worker)[4547]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 04:40:51.193796 systemd-networkd[1898]: calicf66b4c3eaf: Link UP Sep 16 04:40:51.195535 systemd-networkd[1898]: calicf66b4c3eaf: Gained carrier Sep 16 04:40:51.229886 containerd[2009]: 2025-09-16 04:40:50.908 [INFO][4601] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:40:51.229886 containerd[2009]: 2025-09-16 04:40:51.014 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0 whisker-ccf47dddf- calico-system c065da56-bba4-445e-9f5d-e716a1404359 903 0 2025-09-16 04:40:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ccf47dddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-59 whisker-ccf47dddf-hbcpq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicf66b4c3eaf [] [] }} ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-" Sep 16 04:40:51.229886 containerd[2009]: 2025-09-16 04:40:51.014 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.229886 containerd[2009]: 2025-09-16 04:40:51.095 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" HandleID="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Workload="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.096 [INFO][4613] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" HandleID="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Workload="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038de30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-59", "pod":"whisker-ccf47dddf-hbcpq", "timestamp":"2025-09-16 04:40:51.095864655 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.096 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.096 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.096 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.116 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" host="ip-172-31-31-59" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.125 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.134 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.139 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.142 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:51.230444 containerd[2009]: 2025-09-16 04:40:51.142 [INFO][4613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" host="ip-172-31-31-59" Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.145 [INFO][4613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.155 [INFO][4613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" host="ip-172-31-31-59" Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.167 [INFO][4613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.65/26] block=192.168.36.64/26 
handle="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" host="ip-172-31-31-59" Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.167 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.65/26] handle="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" host="ip-172-31-31-59" Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.167 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:51.231005 containerd[2009]: 2025-09-16 04:40:51.167 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.65/26] IPv6=[] ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" HandleID="k8s-pod-network.134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Workload="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.231286 containerd[2009]: 2025-09-16 04:40:51.178 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0", GenerateName:"whisker-ccf47dddf-", Namespace:"calico-system", SelfLink:"", UID:"c065da56-bba4-445e-9f5d-e716a1404359", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ccf47dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"whisker-ccf47dddf-hbcpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf66b4c3eaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:51.231286 containerd[2009]: 2025-09-16 04:40:51.179 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.65/32] ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.231441 containerd[2009]: 2025-09-16 04:40:51.179 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf66b4c3eaf ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.231441 containerd[2009]: 2025-09-16 04:40:51.195 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.231538 containerd[2009]: 2025-09-16 04:40:51.196 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" 
Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0", GenerateName:"whisker-ccf47dddf-", Namespace:"calico-system", SelfLink:"", UID:"c065da56-bba4-445e-9f5d-e716a1404359", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ccf47dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e", Pod:"whisker-ccf47dddf-hbcpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf66b4c3eaf", MAC:"9a:b8:73:6c:9c:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:51.231769 containerd[2009]: 2025-09-16 04:40:51.225 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" Namespace="calico-system" Pod="whisker-ccf47dddf-hbcpq" WorkloadEndpoint="ip--172--31--31--59-k8s-whisker--ccf47dddf--hbcpq-eth0" Sep 16 04:40:51.265271 containerd[2009]: 
time="2025-09-16T04:40:51.265076775Z" level=info msg="connecting to shim 134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e" address="unix:///run/containerd/s/a1fba73a3881d4e7c305fe72b14c35e7f8b046e50c817142060d82226956d4a4" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:51.314959 systemd[1]: Started cri-containerd-134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e.scope - libcontainer container 134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e. Sep 16 04:40:51.451625 containerd[2009]: time="2025-09-16T04:40:51.451032028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ccf47dddf-hbcpq,Uid:c065da56-bba4-445e-9f5d-e716a1404359,Namespace:calico-system,Attempt:0,} returns sandbox id \"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e\"" Sep 16 04:40:51.459061 containerd[2009]: time="2025-09-16T04:40:51.458710648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:40:51.973760 containerd[2009]: time="2025-09-16T04:40:51.972943003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-2psjs,Uid:9764fd8b-561c-41e6-b1ba-67f352cd8f06,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:40:51.983692 kubelet[3330]: I0916 04:40:51.981577 3330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fe9dd5-b719-4d62-b266-718ad94cdd4b" path="/var/lib/kubelet/pods/60fe9dd5-b719-4d62-b266-718ad94cdd4b/volumes" Sep 16 04:40:52.380562 systemd[1]: Started sshd@7-172.31.31.59:22-147.75.109.163:32828.service - OpenSSH per-connection server daemon (147.75.109.163:32828). 
Sep 16 04:40:52.426944 systemd-networkd[1898]: cali05b63e36290: Link UP Sep 16 04:40:52.428802 systemd-networkd[1898]: cali05b63e36290: Gained carrier Sep 16 04:40:52.540862 containerd[2009]: 2025-09-16 04:40:52.092 [INFO][4781] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:40:52.540862 containerd[2009]: 2025-09-16 04:40:52.160 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0 calico-apiserver-58c96c794b- calico-apiserver 9764fd8b-561c-41e6-b1ba-67f352cd8f06 826 0 2025-09-16 04:40:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c96c794b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-59 calico-apiserver-58c96c794b-2psjs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05b63e36290 [] [] }} ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-" Sep 16 04:40:52.540862 containerd[2009]: 2025-09-16 04:40:52.160 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.540862 containerd[2009]: 2025-09-16 04:40:52.232 [INFO][4793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" 
HandleID="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.232 [INFO][4793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" HandleID="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024ba50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-59", "pod":"calico-apiserver-58c96c794b-2psjs", "timestamp":"2025-09-16 04:40:52.232051192 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.232 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.232 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.232 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.262 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" host="ip-172-31-31-59" Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.273 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.290 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.294 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:52.541233 containerd[2009]: 2025-09-16 04:40:52.310 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.310 [INFO][4793] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" host="ip-172-31-31-59" Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.327 [INFO][4793] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242 Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.347 [INFO][4793] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" host="ip-172-31-31-59" Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.407 [INFO][4793] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.66/26] block=192.168.36.64/26 
handle="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" host="ip-172-31-31-59" Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.407 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.66/26] handle="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" host="ip-172-31-31-59" Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.407 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:52.541691 containerd[2009]: 2025-09-16 04:40:52.407 [INFO][4793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.66/26] IPv6=[] ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" HandleID="k8s-pod-network.e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.542009 containerd[2009]: 2025-09-16 04:40:52.420 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0", GenerateName:"calico-apiserver-58c96c794b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9764fd8b-561c-41e6-b1ba-67f352cd8f06", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c96c794b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"calico-apiserver-58c96c794b-2psjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05b63e36290", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:52.542123 containerd[2009]: 2025-09-16 04:40:52.420 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.66/32] ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.542123 containerd[2009]: 2025-09-16 04:40:52.420 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05b63e36290 ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.542123 containerd[2009]: 2025-09-16 04:40:52.429 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" Sep 16 04:40:52.542269 
containerd[2009]: 2025-09-16 04:40:52.430 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0", GenerateName:"calico-apiserver-58c96c794b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9764fd8b-561c-41e6-b1ba-67f352cd8f06", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c96c794b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242", Pod:"calico-apiserver-58c96c794b-2psjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05b63e36290", MAC:"8e:82:13:7b:e2:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:52.542377 
containerd[2009]: 2025-09-16 04:40:52.534 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-2psjs" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--2psjs-eth0"
Sep 16 04:40:52.616669 containerd[2009]: time="2025-09-16T04:40:52.614548542Z" level=info msg="connecting to shim e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242" address="unix:///run/containerd/s/fcd45ec654fcd4e05d9a19322f5c36f2dcdffc6625f246630f1b6db24b9b4ee6" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:52.690575 sshd[4802]: Accepted publickey for core from 147.75.109.163 port 32828 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:40:52.698582 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:40:52.727343 systemd-logind[1982]: New session 8 of user core.
Sep 16 04:40:52.742036 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 16 04:40:52.762119 systemd[1]: Started cri-containerd-e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242.scope - libcontainer container e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242.
Sep 16 04:40:52.810619 systemd-networkd[1898]: calicf66b4c3eaf: Gained IPv6LL
Sep 16 04:40:52.982459 containerd[2009]: time="2025-09-16T04:40:52.982117940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9bw49,Uid:7330b840-14e1-402c-85da-147dbe5c0a4a,Namespace:calico-system,Attempt:0,}"
Sep 16 04:40:53.323526 sshd[4848]: Connection closed by 147.75.109.163 port 32828
Sep 16 04:40:53.325033 sshd-session[4802]: pam_unix(sshd:session): session closed for user core
Sep 16 04:40:53.337083 systemd[1]: sshd@7-172.31.31.59:22-147.75.109.163:32828.service: Deactivated successfully.
Sep 16 04:40:53.349479 systemd[1]: session-8.scope: Deactivated successfully.
Sep 16 04:40:53.352069 systemd-logind[1982]: Session 8 logged out. Waiting for processes to exit.
Sep 16 04:40:53.359741 systemd-logind[1982]: Removed session 8.
Sep 16 04:40:53.431665 containerd[2009]: time="2025-09-16T04:40:53.431367474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:53.434590 containerd[2009]: time="2025-09-16T04:40:53.434398230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 16 04:40:53.440898 containerd[2009]: time="2025-09-16T04:40:53.440551230Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:53.459808 containerd[2009]: time="2025-09-16T04:40:53.458888526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:40:53.473665 containerd[2009]: time="2025-09-16T04:40:53.473507574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.014436554s"
Sep 16 04:40:53.473665 containerd[2009]: time="2025-09-16T04:40:53.473572002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 16 04:40:53.485356 containerd[2009]: time="2025-09-16T04:40:53.485294023Z" level=info msg="CreateContainer within sandbox \"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 16 04:40:53.520674 containerd[2009]: time="2025-09-16T04:40:53.518519179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-2psjs,Uid:9764fd8b-561c-41e6-b1ba-67f352cd8f06,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242\""
Sep 16 04:40:53.527385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756078113.mount: Deactivated successfully.
Sep 16 04:40:53.542301 containerd[2009]: time="2025-09-16T04:40:53.540213907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 16 04:40:53.552253 containerd[2009]: time="2025-09-16T04:40:53.552178987Z" level=info msg="Container 2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:40:53.580395 containerd[2009]: time="2025-09-16T04:40:53.580011451Z" level=info msg="CreateContainer within sandbox \"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f\""
Sep 16 04:40:53.585831 containerd[2009]: time="2025-09-16T04:40:53.583817131Z" level=info msg="StartContainer for \"2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f\""
Sep 16 04:40:53.589369 containerd[2009]: time="2025-09-16T04:40:53.588429955Z" level=info msg="connecting to shim 2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f" address="unix:///run/containerd/s/a1fba73a3881d4e7c305fe72b14c35e7f8b046e50c817142060d82226956d4a4" protocol=ttrpc version=3
Sep 16 04:40:53.684889 systemd[1]: Started cri-containerd-2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f.scope - libcontainer container
2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f. Sep 16 04:40:53.696108 systemd-networkd[1898]: cali75c314ef343: Link UP Sep 16 04:40:53.702070 systemd-networkd[1898]: cali75c314ef343: Gained carrier Sep 16 04:40:53.753068 containerd[2009]: 2025-09-16 04:40:53.205 [INFO][4871] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:40:53.753068 containerd[2009]: 2025-09-16 04:40:53.308 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0 goldmane-54d579b49d- calico-system 7330b840-14e1-402c-85da-147dbe5c0a4a 829 0 2025-09-16 04:40:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-59 goldmane-54d579b49d-9bw49 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali75c314ef343 [] [] }} ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-" Sep 16 04:40:53.753068 containerd[2009]: 2025-09-16 04:40:53.308 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.753068 containerd[2009]: 2025-09-16 04:40:53.500 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" HandleID="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" 
Workload="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.501 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" HandleID="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Workload="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3890), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-59", "pod":"goldmane-54d579b49d-9bw49", "timestamp":"2025-09-16 04:40:53.500609551 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.501 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.501 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.501 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.558 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" host="ip-172-31-31-59" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.572 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.592 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.616 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.633 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:53.753448 containerd[2009]: 2025-09-16 04:40:53.633 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" host="ip-172-31-31-59" Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.638 [INFO][4891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.648 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" host="ip-172-31-31-59" Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.668 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.67/26] block=192.168.36.64/26 
handle="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" host="ip-172-31-31-59" Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.668 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.67/26] handle="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" host="ip-172-31-31-59" Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.671 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:53.753986 containerd[2009]: 2025-09-16 04:40:53.671 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.67/26] IPv6=[] ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" HandleID="k8s-pod-network.755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Workload="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.754271 containerd[2009]: 2025-09-16 04:40:53.681 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7330b840-14e1-402c-85da-147dbe5c0a4a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"goldmane-54d579b49d-9bw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75c314ef343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:53.754271 containerd[2009]: 2025-09-16 04:40:53.682 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.67/32] ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.754442 containerd[2009]: 2025-09-16 04:40:53.682 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75c314ef343 ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.754442 containerd[2009]: 2025-09-16 04:40:53.708 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" Sep 16 04:40:53.754531 containerd[2009]: 2025-09-16 04:40:53.710 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7330b840-14e1-402c-85da-147dbe5c0a4a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe", Pod:"goldmane-54d579b49d-9bw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75c314ef343", MAC:"82:a1:6a:36:24:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:53.755628 containerd[2009]: 2025-09-16 04:40:53.741 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" Namespace="calico-system" Pod="goldmane-54d579b49d-9bw49" 
WorkloadEndpoint="ip--172--31--31--59-k8s-goldmane--54d579b49d--9bw49-eth0"
Sep 16 04:40:53.769965 systemd-networkd[1898]: cali05b63e36290: Gained IPv6LL
Sep 16 04:40:53.852032 containerd[2009]: time="2025-09-16T04:40:53.851825372Z" level=info msg="connecting to shim 755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe" address="unix:///run/containerd/s/c48af64d0bc6e3897a2712514824540fd79fb91c060e896d977efd2930fad254" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:40:53.947081 systemd[1]: Started cri-containerd-755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe.scope - libcontainer container 755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe.
Sep 16 04:40:53.984426 containerd[2009]: time="2025-09-16T04:40:53.982747377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp4p4,Uid:838fa562-42c6-4d1d-a9fb-07d8f39bc7c5,Namespace:calico-system,Attempt:0,}"
Sep 16 04:40:54.191004 containerd[2009]: time="2025-09-16T04:40:54.190860846Z" level=info msg="StartContainer for \"2fe29e3b8f8a5436d0dbc897597c1c7e304beeeccc00f419d627da58b3f2a57f\" returns successfully"
Sep 16 04:40:54.340833 containerd[2009]: time="2025-09-16T04:40:54.340681399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9bw49,Uid:7330b840-14e1-402c-85da-147dbe5c0a4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe\""
Sep 16 04:40:54.361368 containerd[2009]: time="2025-09-16T04:40:54.361197607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"a52736f08d060885a45e3b3a720389261cbc90087f3285bc74fbc8b7623847de\" pid:4721 exit_status:1 exited_at:{seconds:1757997654 nanos:355921603}"
Sep 16 04:40:54.449351 systemd-networkd[1898]: cali8517cabe7be: Link UP
Sep 16 04:40:54.451908 systemd-networkd[1898]: cali8517cabe7be: Gained carrier
Sep 16 04:40:54.492167
containerd[2009]: 2025-09-16 04:40:54.145 [INFO][4993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0 csi-node-driver- calico-system 838fa562-42c6-4d1d-a9fb-07d8f39bc7c5 692 0 2025-09-16 04:40:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-59 csi-node-driver-cp4p4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8517cabe7be [] [] }} ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-" Sep 16 04:40:54.492167 containerd[2009]: 2025-09-16 04:40:54.145 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.492167 containerd[2009]: 2025-09-16 04:40:54.297 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" HandleID="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Workload="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.297 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" HandleID="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" 
Workload="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0a50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-59", "pod":"csi-node-driver-cp4p4", "timestamp":"2025-09-16 04:40:54.297526735 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.297 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.297 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.298 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.332 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" host="ip-172-31-31-59" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.347 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.367 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.376 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.389 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:54.493351 containerd[2009]: 2025-09-16 04:40:54.389 [INFO][5019] ipam/ipam.go 1220: Attempting to 
assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" host="ip-172-31-31-59" Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.394 [INFO][5019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.411 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" host="ip-172-31-31-59" Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.435 [INFO][5019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.68/26] block=192.168.36.64/26 handle="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" host="ip-172-31-31-59" Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.435 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.68/26] handle="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" host="ip-172-31-31-59" Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.435 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:40:54.493904 containerd[2009]: 2025-09-16 04:40:54.435 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.68/26] IPv6=[] ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" HandleID="k8s-pod-network.02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Workload="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.494188 containerd[2009]: 2025-09-16 04:40:54.442 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"csi-node-driver-cp4p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8517cabe7be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:54.494306 containerd[2009]: 2025-09-16 04:40:54.442 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.68/32] ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.494306 containerd[2009]: 2025-09-16 04:40:54.443 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8517cabe7be ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.494306 containerd[2009]: 2025-09-16 04:40:54.451 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.494476 containerd[2009]: 2025-09-16 04:40:54.454 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"838fa562-42c6-4d1d-a9fb-07d8f39bc7c5", 
ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e", Pod:"csi-node-driver-cp4p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8517cabe7be", MAC:"fa:a6:5e:54:87:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:54.494633 containerd[2009]: 2025-09-16 04:40:54.483 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" Namespace="calico-system" Pod="csi-node-driver-cp4p4" WorkloadEndpoint="ip--172--31--31--59-k8s-csi--node--driver--cp4p4-eth0" Sep 16 04:40:54.570672 containerd[2009]: time="2025-09-16T04:40:54.569171180Z" level=info msg="connecting to shim 02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e" address="unix:///run/containerd/s/ee827644c3236b54c76128062ef07b43661ae83f52a273acf1ba2c360606da61" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:54.664773 systemd[1]: Started 
cri-containerd-02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e.scope - libcontainer container 02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e.
Sep 16 04:40:54.790478 containerd[2009]: time="2025-09-16T04:40:54.790404669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp4p4,Uid:838fa562-42c6-4d1d-a9fb-07d8f39bc7c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e\""
Sep 16 04:40:54.978584 containerd[2009]: time="2025-09-16T04:40:54.978519982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-l57qh,Uid:ea374977-c50e-40f3-900e-1191409caa19,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:40:54.992384 containerd[2009]: time="2025-09-16T04:40:54.992211442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j7l5,Uid:20c27947-78fe-4af7-b5ae-3ade9032d31b,Namespace:kube-system,Attempt:0,}"
Sep 16 04:40:55.502600 systemd-networkd[1898]: cali75c314ef343: Gained IPv6LL
Sep 16 04:40:55.503088 systemd-networkd[1898]: cali2c2ee6119b4: Link UP
Sep 16 04:40:55.505864 systemd-networkd[1898]: cali2c2ee6119b4: Gained carrier
Sep 16 04:40:55.572290 containerd[2009]: 2025-09-16 04:40:55.191 [INFO][5105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0 calico-apiserver-58c96c794b- calico-apiserver ea374977-c50e-40f3-900e-1191409caa19 825 0 2025-09-16 04:40:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c96c794b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-59 calico-apiserver-58c96c794b-l57qh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver]
cali2c2ee6119b4 [] [] }} ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-" Sep 16 04:40:55.572290 containerd[2009]: 2025-09-16 04:40:55.193 [INFO][5105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.572290 containerd[2009]: 2025-09-16 04:40:55.357 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" HandleID="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.357 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" HandleID="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-59", "pod":"calico-apiserver-58c96c794b-l57qh", "timestamp":"2025-09-16 04:40:55.356975024 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.357 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.359 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.359 [INFO][5139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.380 [INFO][5139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" host="ip-172-31-31-59" Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.399 [INFO][5139] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.413 [INFO][5139] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.425 [INFO][5139] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.572888 containerd[2009]: 2025-09-16 04:40:55.436 [INFO][5139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.437 [INFO][5139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" host="ip-172-31-31-59" Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.447 [INFO][5139] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4 Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.460 [INFO][5139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" host="ip-172-31-31-59" Sep 16 04:40:55.573340 
containerd[2009]: 2025-09-16 04:40:55.478 [INFO][5139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.69/26] block=192.168.36.64/26 handle="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" host="ip-172-31-31-59" Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.479 [INFO][5139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.69/26] handle="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" host="ip-172-31-31-59" Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.479 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:55.573340 containerd[2009]: 2025-09-16 04:40:55.479 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.69/26] IPv6=[] ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" HandleID="k8s-pod-network.4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Workload="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.574743 containerd[2009]: 2025-09-16 04:40:55.486 [INFO][5105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0", GenerateName:"calico-apiserver-58c96c794b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea374977-c50e-40f3-900e-1191409caa19", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c96c794b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"calico-apiserver-58c96c794b-l57qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c2ee6119b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:55.574983 containerd[2009]: 2025-09-16 04:40:55.486 [INFO][5105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.69/32] ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.574983 containerd[2009]: 2025-09-16 04:40:55.486 [INFO][5105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2ee6119b4 ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.574983 containerd[2009]: 2025-09-16 04:40:55.511 [INFO][5105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" 
WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.575179 containerd[2009]: 2025-09-16 04:40:55.517 [INFO][5105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0", GenerateName:"calico-apiserver-58c96c794b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea374977-c50e-40f3-900e-1191409caa19", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c96c794b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4", Pod:"calico-apiserver-58c96c794b-l57qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c2ee6119b4", MAC:"c2:ca:60:51:2d:ec", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:55.577019 containerd[2009]: 2025-09-16 04:40:55.563 [INFO][5105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" Namespace="calico-apiserver" Pod="calico-apiserver-58c96c794b-l57qh" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--apiserver--58c96c794b--l57qh-eth0" Sep 16 04:40:55.663963 systemd-networkd[1898]: cali9b039c0f376: Link UP Sep 16 04:40:55.682206 systemd-networkd[1898]: cali9b039c0f376: Gained carrier Sep 16 04:40:55.733277 containerd[2009]: time="2025-09-16T04:40:55.729009610Z" level=info msg="connecting to shim 4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4" address="unix:///run/containerd/s/094c5c594d6f6296fd51d5592f007efaa16cf22510a6537489d5674db411c233" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:55.762405 containerd[2009]: 2025-09-16 04:40:55.268 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0 coredns-668d6bf9bc- kube-system 20c27947-78fe-4af7-b5ae-3ade9032d31b 824 0 2025-09-16 04:40:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-59 coredns-668d6bf9bc-6j7l5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b039c0f376 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-" Sep 16 04:40:55.762405 containerd[2009]: 2025-09-16 04:40:55.268 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.762405 containerd[2009]: 2025-09-16 04:40:55.472 [INFO][5148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" HandleID="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.472 [INFO][5148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" HandleID="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-59", "pod":"coredns-668d6bf9bc-6j7l5", "timestamp":"2025-09-16 04:40:55.472174784 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.472 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.479 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.479 [INFO][5148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.543 [INFO][5148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" host="ip-172-31-31-59" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.566 [INFO][5148] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.584 [INFO][5148] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.592 [INFO][5148] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.601 [INFO][5148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:55.762987 containerd[2009]: 2025-09-16 04:40:55.601 [INFO][5148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" host="ip-172-31-31-59" Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.606 [INFO][5148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9 Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.620 [INFO][5148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" host="ip-172-31-31-59" Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.637 [INFO][5148] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.70/26] block=192.168.36.64/26 
handle="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" host="ip-172-31-31-59" Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.637 [INFO][5148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.70/26] handle="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" host="ip-172-31-31-59" Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.638 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:55.764572 containerd[2009]: 2025-09-16 04:40:55.638 [INFO][5148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.70/26] IPv6=[] ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" HandleID="k8s-pod-network.2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.647 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"20c27947-78fe-4af7-b5ae-3ade9032d31b", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"coredns-668d6bf9bc-6j7l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b039c0f376", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.648 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.70/32] ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.648 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b039c0f376 ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.681 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.704 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"20c27947-78fe-4af7-b5ae-3ade9032d31b", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9", Pod:"coredns-668d6bf9bc-6j7l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b039c0f376", MAC:"e2:ef:2b:2a:bc:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:55.766270 containerd[2009]: 2025-09-16 04:40:55.735 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j7l5" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--6j7l5-eth0" Sep 16 04:40:55.874002 systemd[1]: Started cri-containerd-4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4.scope - libcontainer container 4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4. Sep 16 04:40:55.883156 systemd-networkd[1898]: cali8517cabe7be: Gained IPv6LL Sep 16 04:40:55.909022 (udev-worker)[4548]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 04:40:55.921920 containerd[2009]: time="2025-09-16T04:40:55.919004663Z" level=info msg="connecting to shim 2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9" address="unix:///run/containerd/s/3a5167963aa514ba7174924b1dd06ccf7385c966231a53dcd30134c5223abde9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:55.929228 systemd-networkd[1898]: vxlan.calico: Link UP Sep 16 04:40:55.930812 systemd-networkd[1898]: vxlan.calico: Gained carrier Sep 16 04:40:55.975843 containerd[2009]: time="2025-09-16T04:40:55.975001295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvfrd,Uid:58ba772f-28f4-4f9c-bcd4-9c91f3a85256,Namespace:kube-system,Attempt:0,}" Sep 16 04:40:56.053993 systemd[1]: Started cri-containerd-2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9.scope - libcontainer container 2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9. Sep 16 04:40:56.364896 containerd[2009]: time="2025-09-16T04:40:56.362374917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c96c794b-l57qh,Uid:ea374977-c50e-40f3-900e-1191409caa19,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4\"" Sep 16 04:40:56.373691 containerd[2009]: time="2025-09-16T04:40:56.371605197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j7l5,Uid:20c27947-78fe-4af7-b5ae-3ade9032d31b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9\"" Sep 16 04:40:56.388460 containerd[2009]: time="2025-09-16T04:40:56.388391397Z" level=info msg="CreateContainer within sandbox \"2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:40:56.417762 containerd[2009]: time="2025-09-16T04:40:56.417314565Z" level=info msg="Container 
685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:56.443376 containerd[2009]: time="2025-09-16T04:40:56.443296605Z" level=info msg="CreateContainer within sandbox \"2025ed7395316b89de1ef79297d82135f16cf2bcc1c3a953f9c6fea9b4b4aab9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2\"" Sep 16 04:40:56.447151 containerd[2009]: time="2025-09-16T04:40:56.447086649Z" level=info msg="StartContainer for \"685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2\"" Sep 16 04:40:56.463220 containerd[2009]: time="2025-09-16T04:40:56.463152393Z" level=info msg="connecting to shim 685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2" address="unix:///run/containerd/s/3a5167963aa514ba7174924b1dd06ccf7385c966231a53dcd30134c5223abde9" protocol=ttrpc version=3 Sep 16 04:40:56.559219 systemd[1]: Started cri-containerd-685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2.scope - libcontainer container 685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2. 
Sep 16 04:40:56.585825 systemd-networkd[1898]: cali2c2ee6119b4: Gained IPv6LL Sep 16 04:40:56.748055 systemd-networkd[1898]: cali020008b3bc4: Link UP Sep 16 04:40:56.751223 systemd-networkd[1898]: cali020008b3bc4: Gained carrier Sep 16 04:40:56.755681 containerd[2009]: time="2025-09-16T04:40:56.755583059Z" level=info msg="StartContainer for \"685250b35c92a2e7c03f32925b0f0fce30013e7e3010312b4016ac3e1823f1a2\" returns successfully" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.372 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0 coredns-668d6bf9bc- kube-system 58ba772f-28f4-4f9c-bcd4-9c91f3a85256 823 0 2025-09-16 04:40:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-59 coredns-668d6bf9bc-qvfrd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali020008b3bc4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.373 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.529 [INFO][5303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" 
HandleID="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.529 [INFO][5303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" HandleID="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d340), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-59", "pod":"coredns-668d6bf9bc-qvfrd", "timestamp":"2025-09-16 04:40:56.529191718 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.529 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.529 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.529 [INFO][5303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.585 [INFO][5303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.601 [INFO][5303] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.618 [INFO][5303] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.624 [INFO][5303] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.642 [INFO][5303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.644 [INFO][5303] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.654 [INFO][5303] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05 Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.673 [INFO][5303] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.706 [INFO][5303] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.71/26] block=192.168.36.64/26 
handle="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.706 [INFO][5303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.71/26] handle="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" host="ip-172-31-31-59" Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.706 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:56.802274 containerd[2009]: 2025-09-16 04:40:56.706 [INFO][5303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.71/26] IPv6=[] ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" HandleID="k8s-pod-network.f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Workload="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.720 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58ba772f-28f4-4f9c-bcd4-9c91f3a85256", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"coredns-668d6bf9bc-qvfrd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali020008b3bc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.721 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.71/32] ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.721 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali020008b3bc4 ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.756 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.756 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58ba772f-28f4-4f9c-bcd4-9c91f3a85256", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05", Pod:"coredns-668d6bf9bc-qvfrd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali020008b3bc4", MAC:"0a:f5:11:39:f1:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:56.803439 containerd[2009]: 2025-09-16 04:40:56.794 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvfrd" WorkloadEndpoint="ip--172--31--31--59-k8s-coredns--668d6bf9bc--qvfrd-eth0" Sep 16 04:40:56.905694 containerd[2009]: time="2025-09-16T04:40:56.905180592Z" level=info msg="connecting to shim f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05" address="unix:///run/containerd/s/5a345e707616d19001e535cc319309b8c4b68a1fab883f33529656ef08ce3543" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:56.974267 containerd[2009]: time="2025-09-16T04:40:56.974196948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b7c887-7fncf,Uid:a2a9dd72-8738-47f4-85d6-1505ef8e60dc,Namespace:calico-system,Attempt:0,}" Sep 16 04:40:57.052212 systemd[1]: Started cri-containerd-f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05.scope - libcontainer container f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05. 
Sep 16 04:40:57.227799 systemd-networkd[1898]: cali9b039c0f376: Gained IPv6LL Sep 16 04:40:57.405757 containerd[2009]: time="2025-09-16T04:40:57.404542318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvfrd,Uid:58ba772f-28f4-4f9c-bcd4-9c91f3a85256,Namespace:kube-system,Attempt:0,} returns sandbox id \"f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05\"" Sep 16 04:40:57.421174 containerd[2009]: time="2025-09-16T04:40:57.420179614Z" level=info msg="CreateContainer within sandbox \"f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:40:57.581117 containerd[2009]: time="2025-09-16T04:40:57.580268111Z" level=info msg="Container 5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:57.593620 kubelet[3330]: I0916 04:40:57.593522 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6j7l5" podStartSLOduration=53.593408651 podStartE2EDuration="53.593408651s" podCreationTimestamp="2025-09-16 04:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:40:57.591951491 +0000 UTC m=+59.866341286" watchObservedRunningTime="2025-09-16 04:40:57.593408651 +0000 UTC m=+59.867798422" Sep 16 04:40:57.605092 containerd[2009]: time="2025-09-16T04:40:57.603696311Z" level=info msg="CreateContainer within sandbox \"f68937901ce559c1b5e750e3a104a9b5f8301a8ca721e45296a4a06fcf18bf05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee\"" Sep 16 04:40:57.606707 containerd[2009]: time="2025-09-16T04:40:57.605591699Z" level=info msg="StartContainer for \"5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee\"" Sep 16 04:40:57.609803 containerd[2009]: 
time="2025-09-16T04:40:57.609487451Z" level=info msg="connecting to shim 5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee" address="unix:///run/containerd/s/5a345e707616d19001e535cc319309b8c4b68a1fab883f33529656ef08ce3543" protocol=ttrpc version=3 Sep 16 04:40:57.673952 systemd-networkd[1898]: vxlan.calico: Gained IPv6LL Sep 16 04:40:57.759491 systemd[1]: Started cri-containerd-5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee.scope - libcontainer container 5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee. Sep 16 04:40:57.940604 systemd-networkd[1898]: cali7555c49e6b2: Link UP Sep 16 04:40:57.948134 systemd-networkd[1898]: cali7555c49e6b2: Gained carrier Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.290 [INFO][5383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0 calico-kube-controllers-7778b7c887- calico-system a2a9dd72-8738-47f4-85d6-1505ef8e60dc 828 0 2025-09-16 04:40:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7778b7c887 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-59 calico-kube-controllers-7778b7c887-7fncf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7555c49e6b2 [] [] }} ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.290 [INFO][5383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" 
Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.631 [INFO][5410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" HandleID="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Workload="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.632 [INFO][5410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" HandleID="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Workload="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e380), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-59", "pod":"calico-kube-controllers-7778b7c887-7fncf", "timestamp":"2025-09-16 04:40:57.630979787 +0000 UTC"}, Hostname:"ip-172-31-31-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.633 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.633 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.633 [INFO][5410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-59' Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.690 [INFO][5410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.725 [INFO][5410] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.767 [INFO][5410] ipam/ipam.go 511: Trying affinity for 192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.786 [INFO][5410] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.802 [INFO][5410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.802 [INFO][5410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.812 [INFO][5410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.844 [INFO][5410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.887 [INFO][5410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.72/26] block=192.168.36.64/26 
handle="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.887 [INFO][5410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.72/26] handle="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" host="ip-172-31-31-59" Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.887 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:40:58.041698 containerd[2009]: 2025-09-16 04:40:57.887 [INFO][5410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.72/26] IPv6=[] ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" HandleID="k8s-pod-network.8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Workload="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:57.912 [INFO][5383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0", GenerateName:"calico-kube-controllers-7778b7c887-", Namespace:"calico-system", SelfLink:"", UID:"a2a9dd72-8738-47f4-85d6-1505ef8e60dc", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b7c887", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"", Pod:"calico-kube-controllers-7778b7c887-7fncf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7555c49e6b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:57.913 [INFO][5383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.72/32] ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:57.913 [INFO][5383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7555c49e6b2 ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:57.959 [INFO][5383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" 
WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:57.981 [INFO][5383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0", GenerateName:"calico-kube-controllers-7778b7c887-", Namespace:"calico-system", SelfLink:"", UID:"a2a9dd72-8738-47f4-85d6-1505ef8e60dc", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b7c887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-59", ContainerID:"8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b", Pod:"calico-kube-controllers-7778b7c887-7fncf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7555c49e6b2", 
MAC:"e6:68:3f:da:0f:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:40:58.047072 containerd[2009]: 2025-09-16 04:40:58.025 [INFO][5383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" Namespace="calico-system" Pod="calico-kube-controllers-7778b7c887-7fncf" WorkloadEndpoint="ip--172--31--31--59-k8s-calico--kube--controllers--7778b7c887--7fncf-eth0" Sep 16 04:40:58.076546 containerd[2009]: time="2025-09-16T04:40:58.076304793Z" level=info msg="StartContainer for \"5c563f3cdaa9655c0cbc4d97430721424143f78ce18868eaf57afe765f67e5ee\" returns successfully" Sep 16 04:40:58.222157 containerd[2009]: time="2025-09-16T04:40:58.220466770Z" level=info msg="connecting to shim 8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b" address="unix:///run/containerd/s/d378695a193822e575236801e5cf062f83773a9092758e2f7e6ac0bca2956c49" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:40:58.312297 systemd[1]: Started cri-containerd-8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b.scope - libcontainer container 8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b. Sep 16 04:40:58.370792 systemd[1]: Started sshd@8-172.31.31.59:22-147.75.109.163:32836.service - OpenSSH per-connection server daemon (147.75.109.163:32836). 
Sep 16 04:40:58.581405 containerd[2009]: time="2025-09-16T04:40:58.581284776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b7c887-7fncf,Uid:a2a9dd72-8738-47f4-85d6-1505ef8e60dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b\"" Sep 16 04:40:58.603555 kubelet[3330]: I0916 04:40:58.602294 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qvfrd" podStartSLOduration=54.602271552 podStartE2EDuration="54.602271552s" podCreationTimestamp="2025-09-16 04:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:40:58.602250252 +0000 UTC m=+60.876640047" watchObservedRunningTime="2025-09-16 04:40:58.602271552 +0000 UTC m=+60.876661323" Sep 16 04:40:58.634856 systemd-networkd[1898]: cali020008b3bc4: Gained IPv6LL Sep 16 04:40:58.654942 sshd[5541]: Accepted publickey for core from 147.75.109.163 port 32836 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:40:58.659622 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:40:58.673903 systemd-logind[1982]: New session 9 of user core. Sep 16 04:40:58.679001 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:40:59.093928 sshd[5552]: Connection closed by 147.75.109.163 port 32836 Sep 16 04:40:59.095153 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Sep 16 04:40:59.106024 systemd[1]: sshd@8-172.31.31.59:22-147.75.109.163:32836.service: Deactivated successfully. Sep 16 04:40:59.113898 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:40:59.119345 systemd-logind[1982]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:40:59.124206 systemd-logind[1982]: Removed session 9. 
Sep 16 04:40:59.454017 containerd[2009]: time="2025-09-16T04:40:59.453859668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:59.456444 containerd[2009]: time="2025-09-16T04:40:59.456371292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 16 04:40:59.458814 containerd[2009]: time="2025-09-16T04:40:59.458736888Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:59.465585 containerd[2009]: time="2025-09-16T04:40:59.465512136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:40:59.468457 containerd[2009]: time="2025-09-16T04:40:59.468384948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 5.926066253s" Sep 16 04:40:59.468457 containerd[2009]: time="2025-09-16T04:40:59.468446184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 16 04:40:59.469877 containerd[2009]: time="2025-09-16T04:40:59.469802460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:40:59.474947 containerd[2009]: time="2025-09-16T04:40:59.474830808Z" level=info msg="CreateContainer within sandbox 
\"e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:40:59.493793 containerd[2009]: time="2025-09-16T04:40:59.493718964Z" level=info msg="Container 4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:40:59.506440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998372681.mount: Deactivated successfully. Sep 16 04:40:59.532413 containerd[2009]: time="2025-09-16T04:40:59.532190125Z" level=info msg="CreateContainer within sandbox \"e914872416f02f712c331a0050fb8579805efe4e0e558ceb97b2a7aa820a7242\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696\"" Sep 16 04:40:59.533706 containerd[2009]: time="2025-09-16T04:40:59.533449645Z" level=info msg="StartContainer for \"4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696\"" Sep 16 04:40:59.536280 containerd[2009]: time="2025-09-16T04:40:59.536223313Z" level=info msg="connecting to shim 4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696" address="unix:///run/containerd/s/fcd45ec654fcd4e05d9a19322f5c36f2dcdffc6625f246630f1b6db24b9b4ee6" protocol=ttrpc version=3 Sep 16 04:40:59.593933 systemd-networkd[1898]: cali7555c49e6b2: Gained IPv6LL Sep 16 04:40:59.600061 systemd[1]: Started cri-containerd-4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696.scope - libcontainer container 4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696. 
Sep 16 04:40:59.730863 containerd[2009]: time="2025-09-16T04:40:59.730188290Z" level=info msg="StartContainer for \"4b48f219d687c43d9543cbf2e376a452725fc287f6b4a810e7379e9088fa9696\" returns successfully" Sep 16 04:41:01.602764 kubelet[3330]: I0916 04:41:01.602685 3330 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:41:02.099486 ntpd[2210]: Listen normally on 6 vxlan.calico 192.168.36.64:123 Sep 16 04:41:02.099677 ntpd[2210]: Listen normally on 7 calicf66b4c3eaf [fe80::ecee:eeff:feee:eeee%4]:123 Sep 16 04:41:02.099755 ntpd[2210]: Listen normally on 8 cali05b63e36290 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 16 04:41:02.099831 
ntpd[2210]: Listen normally on 9 cali75c314ef343 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 16 04:41:02.099906 ntpd[2210]: Listen normally on 10 cali8517cabe7be [fe80::ecee:eeff:feee:eeee%7]:123 Sep 16 04:41:02.099957 ntpd[2210]: Listen normally on 11 cali2c2ee6119b4 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 16 04:41:02.100035 ntpd[2210]: Listen normally on 12 cali9b039c0f376 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 16 04:41:02.100174 ntpd[2210]: Listen normally on 13 vxlan.calico [fe80::64f3:62ff:fe0c:56c2%10]:123 Sep 16 04:41:02.100221 ntpd[2210]: Listen normally on 14 cali020008b3bc4 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 16 04:41:02.100265 ntpd[2210]: Listen normally on 15 cali7555c49e6b2 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 16 04:41:03.163355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663285709.mount: Deactivated successfully. Sep 16 04:41:03.189439 containerd[2009]: time="2025-09-16T04:41:03.189347691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:03.192138 containerd[2009]: time="2025-09-16T04:41:03.191902395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 16 04:41:03.193538 containerd[2009]: time="2025-09-16T04:41:03.193397139Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:03.197604 containerd[2009]: time="2025-09-16T04:41:03.197405427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:03.200685 containerd[2009]: time="2025-09-16T04:41:03.200600367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image 
id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.730731007s" Sep 16 04:41:03.200685 containerd[2009]: time="2025-09-16T04:41:03.200688315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 16 04:41:03.204862 containerd[2009]: time="2025-09-16T04:41:03.204696843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:41:03.210283 containerd[2009]: time="2025-09-16T04:41:03.209881095Z" level=info msg="CreateContainer within sandbox \"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:41:03.229893 containerd[2009]: time="2025-09-16T04:41:03.229806135Z" level=info msg="Container c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:03.249232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546436938.mount: Deactivated successfully. 
Sep 16 04:41:03.263465 containerd[2009]: time="2025-09-16T04:41:03.263291655Z" level=info msg="CreateContainer within sandbox \"134910afd367f61227c96d20df94e57d6e8fedc93c5dfae44bc4c44c507d520e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca\"" Sep 16 04:41:03.266410 containerd[2009]: time="2025-09-16T04:41:03.266348979Z" level=info msg="StartContainer for \"c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca\"" Sep 16 04:41:03.269603 containerd[2009]: time="2025-09-16T04:41:03.269509491Z" level=info msg="connecting to shim c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca" address="unix:///run/containerd/s/a1fba73a3881d4e7c305fe72b14c35e7f8b046e50c817142060d82226956d4a4" protocol=ttrpc version=3 Sep 16 04:41:03.399487 systemd[1]: Started cri-containerd-c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca.scope - libcontainer container c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca. Sep 16 04:41:03.699118 containerd[2009]: time="2025-09-16T04:41:03.699048269Z" level=info msg="StartContainer for \"c65756ebeba2379ae62ff88b5724048d705f72d390fb102365c214cb8494dcca\" returns successfully" Sep 16 04:41:04.138372 systemd[1]: Started sshd@9-172.31.31.59:22-147.75.109.163:35148.service - OpenSSH per-connection server daemon (147.75.109.163:35148). Sep 16 04:41:04.349040 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 35148 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:04.352024 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:04.365753 systemd-logind[1982]: New session 10 of user core. Sep 16 04:41:04.372965 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 04:41:04.674785 kubelet[3330]: I0916 04:41:04.672672 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58c96c794b-2psjs" podStartSLOduration=38.737623009 podStartE2EDuration="44.672624198s" podCreationTimestamp="2025-09-16 04:40:20 +0000 UTC" firstStartedPulling="2025-09-16 04:40:53.534600715 +0000 UTC m=+55.808990474" lastFinishedPulling="2025-09-16 04:40:59.469601904 +0000 UTC m=+61.743991663" observedRunningTime="2025-09-16 04:41:00.627883994 +0000 UTC m=+62.902273789" watchObservedRunningTime="2025-09-16 04:41:04.672624198 +0000 UTC m=+66.947013969" Sep 16 04:41:04.676000 kubelet[3330]: I0916 04:41:04.675682 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-ccf47dddf-hbcpq" podStartSLOduration=2.929268691 podStartE2EDuration="14.675613794s" podCreationTimestamp="2025-09-16 04:40:50 +0000 UTC" firstStartedPulling="2025-09-16 04:40:51.456554848 +0000 UTC m=+53.730944619" lastFinishedPulling="2025-09-16 04:41:03.202899867 +0000 UTC m=+65.477289722" observedRunningTime="2025-09-16 04:41:04.667046706 +0000 UTC m=+66.941436513" watchObservedRunningTime="2025-09-16 04:41:04.675613794 +0000 UTC m=+66.950003577" Sep 16 04:41:04.719679 sshd[5666]: Connection closed by 147.75.109.163 port 35148 Sep 16 04:41:04.718483 sshd-session[5663]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:04.728557 systemd[1]: sshd@9-172.31.31.59:22-147.75.109.163:35148.service: Deactivated successfully. Sep 16 04:41:04.735379 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:41:04.743324 systemd-logind[1982]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:41:04.767125 systemd[1]: Started sshd@10-172.31.31.59:22-147.75.109.163:35162.service - OpenSSH per-connection server daemon (147.75.109.163:35162). Sep 16 04:41:04.769004 systemd-logind[1982]: Removed session 10. 
Sep 16 04:41:04.965514 sshd[5684]: Accepted publickey for core from 147.75.109.163 port 35162 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:04.968611 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:04.979336 systemd-logind[1982]: New session 11 of user core. Sep 16 04:41:04.992915 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:41:05.367574 sshd[5687]: Connection closed by 147.75.109.163 port 35162 Sep 16 04:41:05.368847 sshd-session[5684]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:05.381890 systemd[1]: sshd@10-172.31.31.59:22-147.75.109.163:35162.service: Deactivated successfully. Sep 16 04:41:05.382652 systemd-logind[1982]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:41:05.392182 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:41:05.424451 systemd-logind[1982]: Removed session 11. Sep 16 04:41:05.427436 systemd[1]: Started sshd@11-172.31.31.59:22-147.75.109.163:35172.service - OpenSSH per-connection server daemon (147.75.109.163:35172). Sep 16 04:41:05.629676 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 35172 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:05.632579 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:05.644182 systemd-logind[1982]: New session 12 of user core. Sep 16 04:41:05.653141 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:41:05.990908 sshd[5700]: Connection closed by 147.75.109.163 port 35172 Sep 16 04:41:05.993721 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:06.003336 systemd[1]: sshd@11-172.31.31.59:22-147.75.109.163:35172.service: Deactivated successfully. Sep 16 04:41:06.004035 systemd-logind[1982]: Session 12 logged out. Waiting for processes to exit. 
Sep 16 04:41:06.009556 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:41:06.016161 systemd-logind[1982]: Removed session 12. Sep 16 04:41:08.305213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103717297.mount: Deactivated successfully. Sep 16 04:41:09.447242 containerd[2009]: time="2025-09-16T04:41:09.447159742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:09.452025 containerd[2009]: time="2025-09-16T04:41:09.451942462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 16 04:41:09.453620 containerd[2009]: time="2025-09-16T04:41:09.453557878Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:09.461263 containerd[2009]: time="2025-09-16T04:41:09.461186758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:09.464268 containerd[2009]: time="2025-09-16T04:41:09.464029198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 6.259007239s" Sep 16 04:41:09.464268 containerd[2009]: time="2025-09-16T04:41:09.464086426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 16 04:41:09.466892 containerd[2009]: 
time="2025-09-16T04:41:09.466810510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:41:09.468211 containerd[2009]: time="2025-09-16T04:41:09.468151042Z" level=info msg="CreateContainer within sandbox \"755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 04:41:09.485433 containerd[2009]: time="2025-09-16T04:41:09.485381734Z" level=info msg="Container 531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:09.506126 containerd[2009]: time="2025-09-16T04:41:09.506074738Z" level=info msg="CreateContainer within sandbox \"755dad3f57be9c8522a3f7772819a448d2ee630823c22f3532069c2e1f84f7fe\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\"" Sep 16 04:41:09.507401 containerd[2009]: time="2025-09-16T04:41:09.507157906Z" level=info msg="StartContainer for \"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\"" Sep 16 04:41:09.511863 containerd[2009]: time="2025-09-16T04:41:09.511742722Z" level=info msg="connecting to shim 531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85" address="unix:///run/containerd/s/c48af64d0bc6e3897a2712514824540fd79fb91c060e896d977efd2930fad254" protocol=ttrpc version=3 Sep 16 04:41:09.548122 systemd[1]: Started cri-containerd-531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85.scope - libcontainer container 531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85. 
Sep 16 04:41:09.634710 containerd[2009]: time="2025-09-16T04:41:09.634598915Z" level=info msg="StartContainer for \"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" returns successfully" Sep 16 04:41:09.695787 kubelet[3330]: I0916 04:41:09.695328 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-9bw49" podStartSLOduration=24.577443944 podStartE2EDuration="39.695304251s" podCreationTimestamp="2025-09-16 04:40:30 +0000 UTC" firstStartedPulling="2025-09-16 04:40:54.347805499 +0000 UTC m=+56.622195258" lastFinishedPulling="2025-09-16 04:41:09.465665794 +0000 UTC m=+71.740055565" observedRunningTime="2025-09-16 04:41:09.692884259 +0000 UTC m=+71.967274042" watchObservedRunningTime="2025-09-16 04:41:09.695304251 +0000 UTC m=+71.969694022" Sep 16 04:41:09.772406 kubelet[3330]: I0916 04:41:09.771958 3330 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:41:10.100670 containerd[2009]: time="2025-09-16T04:41:10.100291917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"d396b9e19ac4bda86cb25c4fe5c1891a694ec24747c6bb2ee6b92f878a498b57\" pid:5779 exit_status:1 exited_at:{seconds:1757997670 nanos:99794565}" Sep 16 04:41:10.811334 containerd[2009]: time="2025-09-16T04:41:10.811268161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"1eef6c689443de8e114869d1cc38eafe650af1a7845c1a30c11f4458e9a7991a\" pid:5806 exit_status:1 exited_at:{seconds:1757997670 nanos:809849761}" Sep 16 04:41:11.032842 systemd[1]: Started sshd@12-172.31.31.59:22-147.75.109.163:34160.service - OpenSSH per-connection server daemon (147.75.109.163:34160). 
Sep 16 04:41:11.256755 sshd[5819]: Accepted publickey for core from 147.75.109.163 port 34160 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:11.259464 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:11.269103 systemd-logind[1982]: New session 13 of user core. Sep 16 04:41:11.277022 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:41:11.572667 sshd[5822]: Connection closed by 147.75.109.163 port 34160 Sep 16 04:41:11.572347 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:11.582586 systemd[1]: sshd@12-172.31.31.59:22-147.75.109.163:34160.service: Deactivated successfully. Sep 16 04:41:11.588840 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:41:11.591014 systemd-logind[1982]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:41:11.595181 systemd-logind[1982]: Removed session 13. Sep 16 04:41:12.699812 containerd[2009]: time="2025-09-16T04:41:12.699697946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:12.707067 containerd[2009]: time="2025-09-16T04:41:12.706963538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 16 04:41:12.712428 containerd[2009]: time="2025-09-16T04:41:12.712274330Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:12.721769 containerd[2009]: time="2025-09-16T04:41:12.721417838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:12.724132 containerd[2009]: time="2025-09-16T04:41:12.723929582Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 3.257055244s" Sep 16 04:41:12.724132 containerd[2009]: time="2025-09-16T04:41:12.723989210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 16 04:41:12.727183 containerd[2009]: time="2025-09-16T04:41:12.727058726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:41:12.731882 containerd[2009]: time="2025-09-16T04:41:12.731799842Z" level=info msg="CreateContainer within sandbox \"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:41:12.762667 containerd[2009]: time="2025-09-16T04:41:12.760362530Z" level=info msg="Container 90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:12.777428 containerd[2009]: time="2025-09-16T04:41:12.777308894Z" level=info msg="CreateContainer within sandbox \"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6\"" Sep 16 04:41:12.778769 containerd[2009]: time="2025-09-16T04:41:12.778439858Z" level=info msg="StartContainer for \"90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6\"" Sep 16 04:41:12.781980 containerd[2009]: time="2025-09-16T04:41:12.781930154Z" level=info msg="connecting to shim 90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6" 
address="unix:///run/containerd/s/ee827644c3236b54c76128062ef07b43661ae83f52a273acf1ba2c360606da61" protocol=ttrpc version=3 Sep 16 04:41:12.832992 systemd[1]: Started cri-containerd-90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6.scope - libcontainer container 90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6. Sep 16 04:41:12.916550 containerd[2009]: time="2025-09-16T04:41:12.916494231Z" level=info msg="StartContainer for \"90c2c73f64db80132ab096ce2f4630db58bd0cbabbcf5275522e811368e4dae6\" returns successfully" Sep 16 04:41:13.099225 containerd[2009]: time="2025-09-16T04:41:13.098991060Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:13.100139 containerd[2009]: time="2025-09-16T04:41:13.100102332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 04:41:13.105388 containerd[2009]: time="2025-09-16T04:41:13.105255552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 378.138998ms" Sep 16 04:41:13.105388 containerd[2009]: time="2025-09-16T04:41:13.105331788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 16 04:41:13.109519 containerd[2009]: time="2025-09-16T04:41:13.109168512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 04:41:13.110431 containerd[2009]: time="2025-09-16T04:41:13.110385840Z" level=info msg="CreateContainer within sandbox 
\"4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:41:13.124685 containerd[2009]: time="2025-09-16T04:41:13.123727644Z" level=info msg="Container 3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:13.139085 containerd[2009]: time="2025-09-16T04:41:13.139030656Z" level=info msg="CreateContainer within sandbox \"4371e06c1a20dd8ece512e4beb39f468e87e182756ed628d11053110e247bcd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293\"" Sep 16 04:41:13.140712 containerd[2009]: time="2025-09-16T04:41:13.140081820Z" level=info msg="StartContainer for \"3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293\"" Sep 16 04:41:13.144493 containerd[2009]: time="2025-09-16T04:41:13.143466456Z" level=info msg="connecting to shim 3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293" address="unix:///run/containerd/s/094c5c594d6f6296fd51d5592f007efaa16cf22510a6537489d5674db411c233" protocol=ttrpc version=3 Sep 16 04:41:13.186981 systemd[1]: Started cri-containerd-3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293.scope - libcontainer container 3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293. 
Sep 16 04:41:13.288509 containerd[2009]: time="2025-09-16T04:41:13.288440425Z" level=info msg="StartContainer for \"3282b6e1a6b7bba6318ceeec111e488dd5bd33774c28c467a973875d204f5293\" returns successfully" Sep 16 04:41:13.712118 kubelet[3330]: I0916 04:41:13.712008 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58c96c794b-l57qh" podStartSLOduration=36.977624028 podStartE2EDuration="53.711987027s" podCreationTimestamp="2025-09-16 04:40:20 +0000 UTC" firstStartedPulling="2025-09-16 04:40:56.372320793 +0000 UTC m=+58.646710564" lastFinishedPulling="2025-09-16 04:41:13.106683792 +0000 UTC m=+75.381073563" observedRunningTime="2025-09-16 04:41:13.711254739 +0000 UTC m=+75.985644534" watchObservedRunningTime="2025-09-16 04:41:13.711987027 +0000 UTC m=+75.986376798" Sep 16 04:41:14.702408 kubelet[3330]: I0916 04:41:14.702308 3330 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:41:16.616580 systemd[1]: Started sshd@13-172.31.31.59:22-147.75.109.163:34166.service - OpenSSH per-connection server daemon (147.75.109.163:34166). Sep 16 04:41:16.853484 sshd[5909]: Accepted publickey for core from 147.75.109.163 port 34166 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:16.857752 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:16.875388 systemd-logind[1982]: New session 14 of user core. Sep 16 04:41:16.883311 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 16 04:41:16.992975 containerd[2009]: time="2025-09-16T04:41:16.992903323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:16.996489 containerd[2009]: time="2025-09-16T04:41:16.996324199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 16 04:41:17.001288 containerd[2009]: time="2025-09-16T04:41:17.000428919Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:17.010246 containerd[2009]: time="2025-09-16T04:41:17.010176183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:17.013102 containerd[2009]: time="2025-09-16T04:41:17.013014447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.903791767s" Sep 16 04:41:17.013102 containerd[2009]: time="2025-09-16T04:41:17.013083735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 16 04:41:17.015368 containerd[2009]: time="2025-09-16T04:41:17.015308163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 04:41:17.062029 containerd[2009]: time="2025-09-16T04:41:17.061848496Z" level=info msg="CreateContainer within 
sandbox \"8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 04:41:17.091811 containerd[2009]: time="2025-09-16T04:41:17.091758160Z" level=info msg="Container 5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:17.120115 containerd[2009]: time="2025-09-16T04:41:17.120021532Z" level=info msg="CreateContainer within sandbox \"8dbf7a917e27826e4d71db8033a23d256cf212bd1f99c4c2ed525bf95f714f9b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\"" Sep 16 04:41:17.121152 containerd[2009]: time="2025-09-16T04:41:17.121108948Z" level=info msg="StartContainer for \"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\"" Sep 16 04:41:17.129287 containerd[2009]: time="2025-09-16T04:41:17.129074452Z" level=info msg="connecting to shim 5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab" address="unix:///run/containerd/s/d378695a193822e575236801e5cf062f83773a9092758e2f7e6ac0bca2956c49" protocol=ttrpc version=3 Sep 16 04:41:17.178077 systemd[1]: Started cri-containerd-5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab.scope - libcontainer container 5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab. Sep 16 04:41:17.329181 sshd[5913]: Connection closed by 147.75.109.163 port 34166 Sep 16 04:41:17.329769 containerd[2009]: time="2025-09-16T04:41:17.329577809Z" level=info msg="StartContainer for \"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" returns successfully" Sep 16 04:41:17.332378 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:17.345887 systemd[1]: sshd@13-172.31.31.59:22-147.75.109.163:34166.service: Deactivated successfully. Sep 16 04:41:17.345977 systemd-logind[1982]: Session 14 logged out. 
Waiting for processes to exit. Sep 16 04:41:17.353414 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:41:17.367379 systemd-logind[1982]: Removed session 14. Sep 16 04:41:17.765499 kubelet[3330]: I0916 04:41:17.765416 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7778b7c887-7fncf" podStartSLOduration=29.337594872 podStartE2EDuration="47.765390211s" podCreationTimestamp="2025-09-16 04:40:30 +0000 UTC" firstStartedPulling="2025-09-16 04:40:58.586872888 +0000 UTC m=+60.861262659" lastFinishedPulling="2025-09-16 04:41:17.014668227 +0000 UTC m=+79.289057998" observedRunningTime="2025-09-16 04:41:17.760595623 +0000 UTC m=+80.034985394" watchObservedRunningTime="2025-09-16 04:41:17.765390211 +0000 UTC m=+80.039779982" Sep 16 04:41:17.813120 containerd[2009]: time="2025-09-16T04:41:17.813060979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"4f3ce9e84a201a74289776d79584fc81b69c9c171fdd39045b9b6d6c550193a3\" pid:5981 exited_at:{seconds:1757997677 nanos:812498311}" Sep 16 04:41:19.293554 containerd[2009]: time="2025-09-16T04:41:19.293478331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:19.295363 containerd[2009]: time="2025-09-16T04:41:19.294933187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 16 04:41:19.297230 containerd[2009]: time="2025-09-16T04:41:19.297165955Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:19.301625 containerd[2009]: time="2025-09-16T04:41:19.301571731Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:19.302944 containerd[2009]: time="2025-09-16T04:41:19.302863603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.287490868s" Sep 16 04:41:19.303061 containerd[2009]: time="2025-09-16T04:41:19.302995147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 16 04:41:19.309263 containerd[2009]: time="2025-09-16T04:41:19.309183283Z" level=info msg="CreateContainer within sandbox \"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 04:41:19.330130 containerd[2009]: time="2025-09-16T04:41:19.329962831Z" level=info msg="Container 4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:19.340999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142295217.mount: Deactivated successfully. 
Sep 16 04:41:19.355339 containerd[2009]: time="2025-09-16T04:41:19.355253719Z" level=info msg="CreateContainer within sandbox \"02afe3e8558bcecc06d5c22380ff4c08d12d43e052317240d4d6cc2c0534974e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9\"" Sep 16 04:41:19.356281 containerd[2009]: time="2025-09-16T04:41:19.356131015Z" level=info msg="StartContainer for \"4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9\"" Sep 16 04:41:19.361025 containerd[2009]: time="2025-09-16T04:41:19.360952027Z" level=info msg="connecting to shim 4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9" address="unix:///run/containerd/s/ee827644c3236b54c76128062ef07b43661ae83f52a273acf1ba2c360606da61" protocol=ttrpc version=3 Sep 16 04:41:19.404011 systemd[1]: Started cri-containerd-4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9.scope - libcontainer container 4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9. 
Sep 16 04:41:19.500917 containerd[2009]: time="2025-09-16T04:41:19.500820740Z" level=info msg="StartContainer for \"4ba63309c39922e0b0769bc37cbfd6cbd76ab2b3cf2285f4bfe967f4ca4454e9\" returns successfully" Sep 16 04:41:19.760277 kubelet[3330]: I0916 04:41:19.758614 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cp4p4" podStartSLOduration=26.249423787 podStartE2EDuration="50.758587713s" podCreationTimestamp="2025-09-16 04:40:29 +0000 UTC" firstStartedPulling="2025-09-16 04:40:54.795460389 +0000 UTC m=+57.069850148" lastFinishedPulling="2025-09-16 04:41:19.304624315 +0000 UTC m=+81.579014074" observedRunningTime="2025-09-16 04:41:19.756434637 +0000 UTC m=+82.030824408" watchObservedRunningTime="2025-09-16 04:41:19.758587713 +0000 UTC m=+82.032977496" Sep 16 04:41:20.178197 kubelet[3330]: I0916 04:41:20.178134 3330 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:41:20.178358 kubelet[3330]: I0916 04:41:20.178211 3330 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:41:21.653686 containerd[2009]: time="2025-09-16T04:41:21.653581690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"d0283262de6990ac6eb72272ff2075bf4c729b9a13d3b5d58c140dd557ac12c6\" pid:6054 exited_at:{seconds:1757997681 nanos:653129482}" Sep 16 04:41:22.364993 systemd[1]: Started sshd@14-172.31.31.59:22-147.75.109.163:37256.service - OpenSSH per-connection server daemon (147.75.109.163:37256). 
Sep 16 04:41:22.570753 sshd[6067]: Accepted publickey for core from 147.75.109.163 port 37256 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:22.574090 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:22.586880 systemd-logind[1982]: New session 15 of user core. Sep 16 04:41:22.590043 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:41:22.876701 sshd[6071]: Connection closed by 147.75.109.163 port 37256 Sep 16 04:41:22.877542 sshd-session[6067]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:22.885140 systemd[1]: sshd@14-172.31.31.59:22-147.75.109.163:37256.service: Deactivated successfully. Sep 16 04:41:22.890257 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:41:22.893974 systemd-logind[1982]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:41:22.896818 systemd-logind[1982]: Removed session 15. Sep 16 04:41:27.914857 systemd[1]: Started sshd@15-172.31.31.59:22-147.75.109.163:37266.service - OpenSSH per-connection server daemon (147.75.109.163:37266). Sep 16 04:41:28.111767 sshd[6085]: Accepted publickey for core from 147.75.109.163 port 37266 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:28.115376 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:28.125487 systemd-logind[1982]: New session 16 of user core. Sep 16 04:41:28.134927 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:41:28.424305 sshd[6088]: Connection closed by 147.75.109.163 port 37266 Sep 16 04:41:28.426792 sshd-session[6085]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:28.432302 systemd[1]: sshd@15-172.31.31.59:22-147.75.109.163:37266.service: Deactivated successfully. Sep 16 04:41:28.437442 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 16 04:41:28.441767 systemd-logind[1982]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:41:28.444892 systemd-logind[1982]: Removed session 16. Sep 16 04:41:28.462064 systemd[1]: Started sshd@16-172.31.31.59:22-147.75.109.163:37278.service - OpenSSH per-connection server daemon (147.75.109.163:37278). Sep 16 04:41:28.661438 sshd[6100]: Accepted publickey for core from 147.75.109.163 port 37278 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:28.664369 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:28.672351 systemd-logind[1982]: New session 17 of user core. Sep 16 04:41:28.682893 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:41:29.432400 sshd[6103]: Connection closed by 147.75.109.163 port 37278 Sep 16 04:41:29.433933 sshd-session[6100]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:29.440445 systemd-logind[1982]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:41:29.440614 systemd[1]: sshd@16-172.31.31.59:22-147.75.109.163:37278.service: Deactivated successfully. Sep 16 04:41:29.445509 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:41:29.450774 systemd-logind[1982]: Removed session 17. Sep 16 04:41:29.468011 systemd[1]: Started sshd@17-172.31.31.59:22-147.75.109.163:37294.service - OpenSSH per-connection server daemon (147.75.109.163:37294). Sep 16 04:41:29.678740 sshd[6113]: Accepted publickey for core from 147.75.109.163 port 37294 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:29.681177 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:29.689343 systemd-logind[1982]: New session 18 of user core. Sep 16 04:41:29.701246 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 16 04:41:30.809005 containerd[2009]: time="2025-09-16T04:41:30.808942700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"0704fe90547d29b62295208820415777ab1f2a204d0ab223d643e927c83c2d99\" pid:6137 exited_at:{seconds:1757997690 nanos:808291304}" Sep 16 04:41:31.173028 sshd[6116]: Connection closed by 147.75.109.163 port 37294 Sep 16 04:41:31.173378 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:31.186964 systemd-logind[1982]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:41:31.189539 systemd[1]: sshd@17-172.31.31.59:22-147.75.109.163:37294.service: Deactivated successfully. Sep 16 04:41:31.196986 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:41:31.224554 systemd-logind[1982]: Removed session 18. Sep 16 04:41:31.228747 systemd[1]: Started sshd@18-172.31.31.59:22-147.75.109.163:60930.service - OpenSSH per-connection server daemon (147.75.109.163:60930). Sep 16 04:41:31.466459 sshd[6151]: Accepted publickey for core from 147.75.109.163 port 60930 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:31.470857 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:31.483123 systemd-logind[1982]: New session 19 of user core. Sep 16 04:41:31.490158 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:41:32.165661 sshd[6157]: Connection closed by 147.75.109.163 port 60930 Sep 16 04:41:32.167389 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:32.178077 systemd-logind[1982]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:41:32.179921 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:41:32.181658 systemd[1]: sshd@18-172.31.31.59:22-147.75.109.163:60930.service: Deactivated successfully. 
Sep 16 04:41:32.215988 systemd-logind[1982]: Removed session 19. Sep 16 04:41:32.217045 systemd[1]: Started sshd@19-172.31.31.59:22-147.75.109.163:60936.service - OpenSSH per-connection server daemon (147.75.109.163:60936). Sep 16 04:41:32.425255 sshd[6167]: Accepted publickey for core from 147.75.109.163 port 60936 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:32.428804 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:32.444945 systemd-logind[1982]: New session 20 of user core. Sep 16 04:41:32.447937 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:41:32.781216 sshd[6170]: Connection closed by 147.75.109.163 port 60936 Sep 16 04:41:32.783021 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:32.796510 systemd[1]: sshd@19-172.31.31.59:22-147.75.109.163:60936.service: Deactivated successfully. Sep 16 04:41:32.805811 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:41:32.811854 systemd-logind[1982]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:41:32.816549 systemd-logind[1982]: Removed session 20. Sep 16 04:41:37.824204 systemd[1]: Started sshd@20-172.31.31.59:22-147.75.109.163:60942.service - OpenSSH per-connection server daemon (147.75.109.163:60942). Sep 16 04:41:38.040510 sshd[6184]: Accepted publickey for core from 147.75.109.163 port 60942 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I Sep 16 04:41:38.043003 sshd-session[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:41:38.051195 systemd-logind[1982]: New session 21 of user core. Sep 16 04:41:38.059941 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 16 04:41:38.341936 sshd[6188]: Connection closed by 147.75.109.163 port 60942
Sep 16 04:41:38.343032 sshd-session[6184]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:38.352069 systemd[1]: sshd@20-172.31.31.59:22-147.75.109.163:60942.service: Deactivated successfully.
Sep 16 04:41:38.358158 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 04:41:38.362761 systemd-logind[1982]: Session 21 logged out. Waiting for processes to exit.
Sep 16 04:41:38.365433 systemd-logind[1982]: Removed session 21.
Sep 16 04:41:40.575931 kubelet[3330]: I0916 04:41:40.575545 3330 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:41:40.808432 containerd[2009]: time="2025-09-16T04:41:40.808370310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"6ed3f8622b4cd739fd8e0d8bf4810d0445ef64e895627a86e2735fee7246ade4\" pid:6221 exited_at:{seconds:1757997700 nanos:807789414}"
Sep 16 04:41:43.379866 systemd[1]: Started sshd@21-172.31.31.59:22-147.75.109.163:53924.service - OpenSSH per-connection server daemon (147.75.109.163:53924).
Sep 16 04:41:43.590118 sshd[6234]: Accepted publickey for core from 147.75.109.163 port 53924 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:41:43.594004 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:43.612755 systemd-logind[1982]: New session 22 of user core.
Sep 16 04:41:43.619926 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 04:41:43.965880 sshd[6237]: Connection closed by 147.75.109.163 port 53924
Sep 16 04:41:43.966365 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:43.976478 systemd-logind[1982]: Session 22 logged out. Waiting for processes to exit.
Sep 16 04:41:43.978959 systemd[1]: sshd@21-172.31.31.59:22-147.75.109.163:53924.service: Deactivated successfully.
Sep 16 04:41:43.989492 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 04:41:43.996676 systemd-logind[1982]: Removed session 22.
Sep 16 04:41:47.815155 containerd[2009]: time="2025-09-16T04:41:47.814500204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"b99908c3c64b777aebc78b1ead931b0e5125c03e4d03714ef305a4f5046c21c5\" pid:6260 exited_at:{seconds:1757997707 nanos:814027620}"
Sep 16 04:41:49.007111 systemd[1]: Started sshd@22-172.31.31.59:22-147.75.109.163:53934.service - OpenSSH per-connection server daemon (147.75.109.163:53934).
Sep 16 04:41:49.219211 sshd[6269]: Accepted publickey for core from 147.75.109.163 port 53934 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:41:49.223053 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:49.236836 systemd-logind[1982]: New session 23 of user core.
Sep 16 04:41:49.247740 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 16 04:41:49.554682 sshd[6272]: Connection closed by 147.75.109.163 port 53934
Sep 16 04:41:49.552915 sshd-session[6269]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:49.563880 systemd[1]: sshd@22-172.31.31.59:22-147.75.109.163:53934.service: Deactivated successfully.
Sep 16 04:41:49.570390 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 04:41:49.577811 systemd-logind[1982]: Session 23 logged out. Waiting for processes to exit.
Sep 16 04:41:49.583752 systemd-logind[1982]: Removed session 23.
Sep 16 04:41:51.632413 containerd[2009]: time="2025-09-16T04:41:51.632356431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"f541dd0e25f17a536902085d45e5abec1e759eeb2a396470788b94c8b30ba3a4\" pid:6294 exited_at:{seconds:1757997711 nanos:629464395}"
Sep 16 04:41:54.593241 systemd[1]: Started sshd@23-172.31.31.59:22-147.75.109.163:49728.service - OpenSSH per-connection server daemon (147.75.109.163:49728).
Sep 16 04:41:54.809355 sshd[6307]: Accepted publickey for core from 147.75.109.163 port 49728 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:41:54.812203 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:54.824851 systemd-logind[1982]: New session 24 of user core.
Sep 16 04:41:54.831304 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 04:41:55.197205 sshd[6310]: Connection closed by 147.75.109.163 port 49728
Sep 16 04:41:55.197086 sshd-session[6307]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:55.208571 systemd[1]: sshd@23-172.31.31.59:22-147.75.109.163:49728.service: Deactivated successfully.
Sep 16 04:41:55.219518 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 04:41:55.224958 systemd-logind[1982]: Session 24 logged out. Waiting for processes to exit.
Sep 16 04:41:55.230520 systemd-logind[1982]: Removed session 24.
Sep 16 04:41:59.502625 containerd[2009]: time="2025-09-16T04:41:59.502549834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"aaf75cbcdf41f645b58ede431fefc44cdf7bae567a37e2df717979002e575123\" pid:6336 exited_at:{seconds:1757997719 nanos:501597202}"
Sep 16 04:42:00.239621 systemd[1]: Started sshd@24-172.31.31.59:22-147.75.109.163:48294.service - OpenSSH per-connection server daemon (147.75.109.163:48294).
Sep 16 04:42:00.453692 sshd[6348]: Accepted publickey for core from 147.75.109.163 port 48294 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:42:00.457962 sshd-session[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:42:00.472446 systemd-logind[1982]: New session 25 of user core.
Sep 16 04:42:00.480187 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 04:42:00.772715 sshd[6351]: Connection closed by 147.75.109.163 port 48294
Sep 16 04:42:00.773937 sshd-session[6348]: pam_unix(sshd:session): session closed for user core
Sep 16 04:42:00.783566 systemd[1]: sshd@24-172.31.31.59:22-147.75.109.163:48294.service: Deactivated successfully.
Sep 16 04:42:00.791094 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 04:42:00.795022 systemd-logind[1982]: Session 25 logged out. Waiting for processes to exit.
Sep 16 04:42:00.800533 systemd-logind[1982]: Removed session 25.
Sep 16 04:42:05.821061 systemd[1]: Started sshd@25-172.31.31.59:22-147.75.109.163:48296.service - OpenSSH per-connection server daemon (147.75.109.163:48296).
Sep 16 04:42:06.032327 sshd[6365]: Accepted publickey for core from 147.75.109.163 port 48296 ssh2: RSA SHA256:Mbxc1OONLpKvl/xXfVcYZp4DH9DY1kjuiyJkLYJ329I
Sep 16 04:42:06.034976 sshd-session[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:42:06.049251 systemd-logind[1982]: New session 26 of user core.
Sep 16 04:42:06.056077 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 16 04:42:06.348325 sshd[6368]: Connection closed by 147.75.109.163 port 48296
Sep 16 04:42:06.349499 sshd-session[6365]: pam_unix(sshd:session): session closed for user core
Sep 16 04:42:06.357411 systemd[1]: sshd@25-172.31.31.59:22-147.75.109.163:48296.service: Deactivated successfully.
Sep 16 04:42:06.363437 systemd[1]: session-26.scope: Deactivated successfully.
Sep 16 04:42:06.370823 systemd-logind[1982]: Session 26 logged out. Waiting for processes to exit.
Sep 16 04:42:06.373490 systemd-logind[1982]: Removed session 26.
Sep 16 04:42:10.865710 containerd[2009]: time="2025-09-16T04:42:10.865622771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"568eb90ac853a0f27385dd89302af7908e383b85f9666921f59cc0f14f5003fb\" pid:6391 exited_at:{seconds:1757997730 nanos:865190339}"
Sep 16 04:42:17.803582 containerd[2009]: time="2025-09-16T04:42:17.803448005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"ae9bb087f64a7a284880cf03f2fc43d05c33c842d29bbd739bb0dcac58796ce4\" pid:6414 exited_at:{seconds:1757997737 nanos:802290221}"
Sep 16 04:42:21.801683 containerd[2009]: time="2025-09-16T04:42:21.800601693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"776e24892eb6b222e2763ecdda4d1166eaa7891101c36c5d4417603971c9bd22\" pid:6447 exit_status:1 exited_at:{seconds:1757997741 nanos:800004789}"
Sep 16 04:42:30.637835 containerd[2009]: time="2025-09-16T04:42:30.637758761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"d3169a6a86a24094c2ec77cd2ac2b8a7bbeda0c6d0e32900bc6a9e44bf932728\" pid:6472 exited_at:{seconds:1757997750 nanos:636560393}"
Sep 16 04:42:40.935877 containerd[2009]: time="2025-09-16T04:42:40.935809504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"e655cc5423a8be1abbc3c7c3f3163e87591a0c79b066c2e8d7efc87cd76992cc\" pid:6517 exited_at:{seconds:1757997760 nanos:934842076}"
Sep 16 04:42:47.786324 containerd[2009]: time="2025-09-16T04:42:47.786248470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5983181c580a48e7a5ae7cc47a7c441cf35d2306174cc7bbb42a96b41a619aab\" id:\"a5fe7ad7437a98f78fd685cea27168db17916c883015761fa56056d2c1a920da\" pid:6539 exited_at:{seconds:1757997767 nanos:785855014}"
Sep 16 04:42:51.526588 containerd[2009]: time="2025-09-16T04:42:51.526499701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fe0c3d289715b04f84c366060a60aa57bc1dd53cd8d63f525b8c8283ac8563c\" id:\"deebf0affa6a0e8ff77c3e527d5d8e53cb99fc2ee585e41ee1d23f5b9e83121d\" pid:6562 exit_status:1 exited_at:{seconds:1757997771 nanos:526080781}"
Sep 16 04:42:53.094402 systemd[1]: cri-containerd-9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783.scope: Deactivated successfully.
Sep 16 04:42:53.095018 systemd[1]: cri-containerd-9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783.scope: Consumed 31.376s CPU time, 100.8M memory peak, 816K read from disk.
Sep 16 04:42:53.103434 containerd[2009]: time="2025-09-16T04:42:53.103184641Z" level=info msg="received exit event container_id:\"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" id:\"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" pid:3831 exit_status:1 exited_at:{seconds:1757997773 nanos:102205825}"
Sep 16 04:42:53.104986 containerd[2009]: time="2025-09-16T04:42:53.104708353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" id:\"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" pid:3831 exit_status:1 exited_at:{seconds:1757997773 nanos:102205825}"
Sep 16 04:42:53.144114 systemd[1]: cri-containerd-a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9.scope: Deactivated successfully.
Sep 16 04:42:53.145311 systemd[1]: cri-containerd-a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9.scope: Consumed 5.323s CPU time, 66M memory peak, 64K read from disk.
Sep 16 04:42:53.162864 containerd[2009]: time="2025-09-16T04:42:53.162775789Z" level=info msg="received exit event container_id:\"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\" id:\"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\" pid:3172 exit_status:1 exited_at:{seconds:1757997773 nanos:162287161}"
Sep 16 04:42:53.163226 containerd[2009]: time="2025-09-16T04:42:53.163175413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\" id:\"a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9\" pid:3172 exit_status:1 exited_at:{seconds:1757997773 nanos:162287161}"
Sep 16 04:42:53.226198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783-rootfs.mount: Deactivated successfully.
Sep 16 04:42:53.262593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9-rootfs.mount: Deactivated successfully.
Sep 16 04:42:54.117059 kubelet[3330]: I0916 04:42:54.117022 3330 scope.go:117] "RemoveContainer" containerID="a56f5a44551c4cbc40928c16eee1b34c182fd1642d0aceb42f686ef7758806c9"
Sep 16 04:42:54.128207 containerd[2009]: time="2025-09-16T04:42:54.127960034Z" level=info msg="CreateContainer within sandbox \"fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 16 04:42:54.130129 kubelet[3330]: I0916 04:42:54.129744 3330 scope.go:117] "RemoveContainer" containerID="9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783"
Sep 16 04:42:54.136209 containerd[2009]: time="2025-09-16T04:42:54.136162874Z" level=info msg="CreateContainer within sandbox \"2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 16 04:42:54.157701 containerd[2009]: time="2025-09-16T04:42:54.155243726Z" level=info msg="Container 1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:54.173979 containerd[2009]: time="2025-09-16T04:42:54.173911022Z" level=info msg="Container 87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:54.184765 containerd[2009]: time="2025-09-16T04:42:54.184700402Z" level=info msg="CreateContainer within sandbox \"2693ff28a22312e26025fe51a05d68d2baa4377c19b15b866e9cf3026f95d2bb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\""
Sep 16 04:42:54.185340 containerd[2009]: time="2025-09-16T04:42:54.185306798Z" level=info msg="StartContainer for \"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\""
Sep 16 04:42:54.189055 containerd[2009]: time="2025-09-16T04:42:54.188918270Z" level=info msg="connecting to shim 1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee" address="unix:///run/containerd/s/12a347be6869ceb08a5fad90f0cc2133e564f67f89625723baa9022c6cce0f01" protocol=ttrpc version=3
Sep 16 04:42:54.201603 containerd[2009]: time="2025-09-16T04:42:54.201331586Z" level=info msg="CreateContainer within sandbox \"fe7123f44c6c1253e12f9a6f6057c44a6af34cbe74e8f63d79a80922eef9e44c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7\""
Sep 16 04:42:54.203014 containerd[2009]: time="2025-09-16T04:42:54.202949486Z" level=info msg="StartContainer for \"87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7\""
Sep 16 04:42:54.205320 containerd[2009]: time="2025-09-16T04:42:54.205203938Z" level=info msg="connecting to shim 87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7" address="unix:///run/containerd/s/3a70ac2c41d1d8f52c9c1f0b030286871006d5686549b1d2e3ae422eaee36622" protocol=ttrpc version=3
Sep 16 04:42:54.248056 systemd[1]: Started cri-containerd-1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee.scope - libcontainer container 1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee.
Sep 16 04:42:54.264235 systemd[1]: Started cri-containerd-87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7.scope - libcontainer container 87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7.
Sep 16 04:42:54.358198 containerd[2009]: time="2025-09-16T04:42:54.358137723Z" level=info msg="StartContainer for \"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\" returns successfully"
Sep 16 04:42:54.385368 containerd[2009]: time="2025-09-16T04:42:54.385210527Z" level=info msg="StartContainer for \"87cff249db1402657f50ebc13243887737b1de392c83c92fb17619371bb33cd7\" returns successfully"
Sep 16 04:42:58.365296 systemd[1]: cri-containerd-31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590.scope: Deactivated successfully.
Sep 16 04:42:58.366720 systemd[1]: cri-containerd-31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590.scope: Consumed 5.695s CPU time, 22.5M memory peak, 200K read from disk.
Sep 16 04:42:58.373241 containerd[2009]: time="2025-09-16T04:42:58.373143331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\" id:\"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\" pid:3179 exit_status:1 exited_at:{seconds:1757997778 nanos:372269899}"
Sep 16 04:42:58.373241 containerd[2009]: time="2025-09-16T04:42:58.373147087Z" level=info msg="received exit event container_id:\"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\" id:\"31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590\" pid:3179 exit_status:1 exited_at:{seconds:1757997778 nanos:372269899}"
Sep 16 04:42:58.422484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590-rootfs.mount: Deactivated successfully.
Sep 16 04:42:59.169671 kubelet[3330]: I0916 04:42:59.168895 3330 scope.go:117] "RemoveContainer" containerID="31af12b3a9a618e7ae12a39268d56c3b80e45093d1f035efd4d1462275bf7590"
Sep 16 04:42:59.175667 containerd[2009]: time="2025-09-16T04:42:59.175567591Z" level=info msg="CreateContainer within sandbox \"5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 16 04:42:59.196990 containerd[2009]: time="2025-09-16T04:42:59.196921639Z" level=info msg="Container ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:59.210705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086821655.mount: Deactivated successfully.
Sep 16 04:42:59.218347 containerd[2009]: time="2025-09-16T04:42:59.218282719Z" level=info msg="CreateContainer within sandbox \"5d3ec0104e553dc3af2c8ee58c8f8fed8ddd5aef8c4869496797f92851a17c7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e\""
Sep 16 04:42:59.219685 containerd[2009]: time="2025-09-16T04:42:59.219395251Z" level=info msg="StartContainer for \"ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e\""
Sep 16 04:42:59.222373 containerd[2009]: time="2025-09-16T04:42:59.222312595Z" level=info msg="connecting to shim ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e" address="unix:///run/containerd/s/50e3e796e651dc3b13415594424c3f119f157f7751239b4b8000eb1583810909" protocol=ttrpc version=3
Sep 16 04:42:59.268130 systemd[1]: Started cri-containerd-ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e.scope - libcontainer container ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e.
Sep 16 04:42:59.378951 containerd[2009]: time="2025-09-16T04:42:59.378605996Z" level=info msg="StartContainer for \"ff1626a9d7af5c4f891ff7b197f41011953adfbdf61bab3328a6cbe80da29e1e\" returns successfully"
Sep 16 04:42:59.458233 containerd[2009]: time="2025-09-16T04:42:59.457845680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"a1f8160162814b075626f2d02230c03c3478e7e9c746dd888a34488de3573f0f\" pid:6703 exited_at:{seconds:1757997779 nanos:457437404}"
Sep 16 04:43:01.024869 kubelet[3330]: E0916 04:43:01.024792 3330 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-59?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 16 04:43:05.849679 systemd[1]: cri-containerd-1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee.scope: Deactivated successfully.
Sep 16 04:43:05.854455 containerd[2009]: time="2025-09-16T04:43:05.853738648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\" id:\"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\" pid:6620 exit_status:1 exited_at:{seconds:1757997785 nanos:853215064}"
Sep 16 04:43:05.854455 containerd[2009]: time="2025-09-16T04:43:05.853893496Z" level=info msg="received exit event container_id:\"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\" id:\"1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee\" pid:6620 exit_status:1 exited_at:{seconds:1757997785 nanos:853215064}"
Sep 16 04:43:05.895836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee-rootfs.mount: Deactivated successfully.
Sep 16 04:43:06.201464 kubelet[3330]: I0916 04:43:06.200527 3330 scope.go:117] "RemoveContainer" containerID="9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783"
Sep 16 04:43:06.201464 kubelet[3330]: I0916 04:43:06.201059 3330 scope.go:117] "RemoveContainer" containerID="1f4c0f5a4278cdeae710fcd575852d5f4e72ef841b6c1b913b0e01c3c9734dee"
Sep 16 04:43:06.201464 kubelet[3330]: E0916 04:43:06.201282 3330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-4dmmd_tigera-operator(5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b)\"" pod="tigera-operator/tigera-operator-755d956888-4dmmd" podUID="5208b3f8-ecab-4c7f-9ab3-5d83f5d4442b"
Sep 16 04:43:06.205079 containerd[2009]: time="2025-09-16T04:43:06.205009898Z" level=info msg="RemoveContainer for \"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\""
Sep 16 04:43:06.214811 containerd[2009]: time="2025-09-16T04:43:06.214701914Z" level=info msg="RemoveContainer for \"9a703896e460264d2e003818d3ca0c610a893681e73f238cae8ca48ac7bcc783\" returns successfully"
Sep 16 04:43:10.797574 containerd[2009]: time="2025-09-16T04:43:10.797494989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"531bd4a65cc8cd59378d78fd073047cd201ae70da4f062e3156829ded6a6ae85\" id:\"34739eea332d0fd0fa0f4f82fdd261db62dc0f8717249a2c15122aff86d9e3b4\" pid:6753 exited_at:{seconds:1757997790 nanos:797011245}"
Sep 16 04:43:11.025402 kubelet[3330]: E0916 04:43:11.025163 3330 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-59?timeout=10s\": context deadline exceeded"