Jul 6 23:27:06.110342 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 6 23:27:06.110386 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025 Jul 6 23:27:06.110410 kernel: KASLR disabled due to lack of seed Jul 6 23:27:06.110859 kernel: efi: EFI v2.7 by EDK II Jul 6 23:27:06.110877 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Jul 6 23:27:06.110893 kernel: secureboot: Secure boot disabled Jul 6 23:27:06.110910 kernel: ACPI: Early table checksum verification disabled Jul 6 23:27:06.110925 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 6 23:27:06.110940 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 6 23:27:06.110955 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 6 23:27:06.110970 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 6 23:27:06.110992 kernel: ACPI: FACS 0x0000000078630000 000040 Jul 6 23:27:06.111007 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 6 23:27:06.111022 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 6 23:27:06.111039 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 6 23:27:06.111055 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 6 23:27:06.111075 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 6 23:27:06.111091 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 6 23:27:06.111107 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 
AMZN 00000001) Jul 6 23:27:06.111123 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 6 23:27:06.111139 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 6 23:27:06.111156 kernel: printk: legacy bootconsole [uart0] enabled Jul 6 23:27:06.111171 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 6 23:27:06.111187 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 6 23:27:06.111204 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Jul 6 23:27:06.111219 kernel: Zone ranges: Jul 6 23:27:06.111235 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 6 23:27:06.111256 kernel: DMA32 empty Jul 6 23:27:06.111272 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 6 23:27:06.111288 kernel: Device empty Jul 6 23:27:06.111310 kernel: Movable zone start for each node Jul 6 23:27:06.111353 kernel: Early memory node ranges Jul 6 23:27:06.111392 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 6 23:27:06.111482 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 6 23:27:06.111517 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 6 23:27:06.111537 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 6 23:27:06.111554 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 6 23:27:06.111570 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 6 23:27:06.111589 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 6 23:27:06.111611 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 6 23:27:06.111634 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 6 23:27:06.111651 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 6 23:27:06.111667 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jul 6 23:27:06.111683 kernel: psci: probing for conduit method from ACPI. Jul 6 23:27:06.111704 kernel: psci: PSCIv1.0 detected in firmware. 
Jul 6 23:27:06.111720 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:27:06.111736 kernel: psci: Trusted OS migration not required Jul 6 23:27:06.111752 kernel: psci: SMC Calling Convention v1.1 Jul 6 23:27:06.111769 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 6 23:27:06.111786 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 6 23:27:06.111803 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 6 23:27:06.111820 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 6 23:27:06.111836 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:27:06.111852 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:27:06.111869 kernel: CPU features: detected: Spectre-v2 Jul 6 23:27:06.111890 kernel: CPU features: detected: Spectre-v3a Jul 6 23:27:06.111906 kernel: CPU features: detected: Spectre-BHB Jul 6 23:27:06.111923 kernel: CPU features: detected: ARM erratum 1742098 Jul 6 23:27:06.111939 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 6 23:27:06.111955 kernel: alternatives: applying boot alternatives Jul 6 23:27:06.111974 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:27:06.111992 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 6 23:27:06.112009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:27:06.112025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:27:06.112041 kernel: Fallback order for Node 0: 0 Jul 6 23:27:06.112061 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jul 6 23:27:06.112078 kernel: Policy zone: Normal Jul 6 23:27:06.112094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:27:06.112110 kernel: software IO TLB: area num 2. Jul 6 23:27:06.112127 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Jul 6 23:27:06.112143 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:27:06.112159 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:27:06.112177 kernel: rcu: RCU event tracing is enabled. Jul 6 23:27:06.112194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:27:06.112211 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:27:06.112227 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:27:06.112244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:27:06.112264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:27:06.112281 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:27:06.112297 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 6 23:27:06.112314 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:27:06.112330 kernel: GICv3: 96 SPIs implemented Jul 6 23:27:06.112346 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:27:06.112362 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:27:06.112378 kernel: GICv3: GICv3 features: 16 PPIs Jul 6 23:27:06.112394 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 6 23:27:06.112411 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 6 23:27:06.114460 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 6 23:27:06.114480 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jul 6 23:27:06.114518 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jul 6 23:27:06.114535 kernel: GICv3: using LPI property table @0x0000000400110000 Jul 6 23:27:06.114551 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 6 23:27:06.114568 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jul 6 23:27:06.114585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:27:06.114601 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 6 23:27:06.114618 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 6 23:27:06.114635 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 6 23:27:06.114651 kernel: Console: colour dummy device 80x25 Jul 6 23:27:06.114668 kernel: printk: legacy console [tty1] enabled Jul 6 23:27:06.114685 kernel: ACPI: Core revision 20240827 Jul 6 23:27:06.114707 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Jul 6 23:27:06.114724 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:27:06.114741 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 6 23:27:06.114757 kernel: landlock: Up and running. Jul 6 23:27:06.114774 kernel: SELinux: Initializing. Jul 6 23:27:06.114790 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:27:06.114807 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:27:06.114823 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:27:06.114840 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:27:06.114861 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 6 23:27:06.114878 kernel: Remapping and enabling EFI services. Jul 6 23:27:06.114894 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:27:06.114911 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:27:06.114928 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 6 23:27:06.114944 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jul 6 23:27:06.114961 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 6 23:27:06.114977 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:27:06.114994 kernel: SMP: Total of 2 processors activated. 
Jul 6 23:27:06.115024 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:27:06.115041 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:27:06.115063 kernel: CPU features: detected: 32-bit EL1 Support Jul 6 23:27:06.115080 kernel: CPU features: detected: CRC32 instructions Jul 6 23:27:06.115097 kernel: alternatives: applying system-wide alternatives Jul 6 23:27:06.115115 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved) Jul 6 23:27:06.115133 kernel: devtmpfs: initialized Jul 6 23:27:06.115155 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:27:06.115173 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:27:06.115190 kernel: 16912 pages in range for non-PLT usage Jul 6 23:27:06.115208 kernel: 508432 pages in range for PLT usage Jul 6 23:27:06.115225 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:27:06.115243 kernel: SMBIOS 3.0.0 present. 
Jul 6 23:27:06.115260 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 6 23:27:06.115277 kernel: DMI: Memory slots populated: 0/0 Jul 6 23:27:06.115294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:27:06.115316 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:27:06.115334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:27:06.115352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:27:06.115369 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:27:06.115387 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1 Jul 6 23:27:06.115404 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:27:06.115456 kernel: cpuidle: using governor menu Jul 6 23:27:06.115477 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 6 23:27:06.115495 kernel: ASID allocator initialised with 65536 entries Jul 6 23:27:06.115518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:27:06.115536 kernel: Serial: AMBA PL011 UART driver Jul 6 23:27:06.115553 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:27:06.115571 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:27:06.115588 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:27:06.115605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:27:06.115623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:27:06.115640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:27:06.115657 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:27:06.115679 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:27:06.115696 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:27:06.115715 kernel: ACPI: Added _OSI(Processor 
Device) Jul 6 23:27:06.115733 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:27:06.115750 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:27:06.115768 kernel: ACPI: Interpreter enabled Jul 6 23:27:06.115785 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:27:06.115803 kernel: ACPI: MCFG table detected, 1 entries Jul 6 23:27:06.115820 kernel: ACPI: CPU0 has been hot-added Jul 6 23:27:06.115842 kernel: ACPI: CPU1 has been hot-added Jul 6 23:27:06.115859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 6 23:27:06.116153 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:27:06.116344 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 6 23:27:06.116564 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 6 23:27:06.116824 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 6 23:27:06.117015 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 6 23:27:06.117047 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 6 23:27:06.117065 kernel: acpiphp: Slot [1] registered Jul 6 23:27:06.117100 kernel: acpiphp: Slot [2] registered Jul 6 23:27:06.117120 kernel: acpiphp: Slot [3] registered Jul 6 23:27:06.117138 kernel: acpiphp: Slot [4] registered Jul 6 23:27:06.117156 kernel: acpiphp: Slot [5] registered Jul 6 23:27:06.117173 kernel: acpiphp: Slot [6] registered Jul 6 23:27:06.117190 kernel: acpiphp: Slot [7] registered Jul 6 23:27:06.117207 kernel: acpiphp: Slot [8] registered Jul 6 23:27:06.117224 kernel: acpiphp: Slot [9] registered Jul 6 23:27:06.117247 kernel: acpiphp: Slot [10] registered Jul 6 23:27:06.117264 kernel: acpiphp: Slot [11] registered Jul 6 23:27:06.117282 kernel: acpiphp: Slot [12] registered Jul 6 23:27:06.117299 kernel: acpiphp: Slot [13] registered Jul 6 23:27:06.117316 kernel: acpiphp: Slot [14] 
registered Jul 6 23:27:06.117333 kernel: acpiphp: Slot [15] registered Jul 6 23:27:06.117350 kernel: acpiphp: Slot [16] registered Jul 6 23:27:06.117368 kernel: acpiphp: Slot [17] registered Jul 6 23:27:06.117385 kernel: acpiphp: Slot [18] registered Jul 6 23:27:06.117406 kernel: acpiphp: Slot [19] registered Jul 6 23:27:06.118494 kernel: acpiphp: Slot [20] registered Jul 6 23:27:06.118519 kernel: acpiphp: Slot [21] registered Jul 6 23:27:06.118537 kernel: acpiphp: Slot [22] registered Jul 6 23:27:06.118554 kernel: acpiphp: Slot [23] registered Jul 6 23:27:06.118571 kernel: acpiphp: Slot [24] registered Jul 6 23:27:06.118589 kernel: acpiphp: Slot [25] registered Jul 6 23:27:06.118606 kernel: acpiphp: Slot [26] registered Jul 6 23:27:06.118623 kernel: acpiphp: Slot [27] registered Jul 6 23:27:06.118640 kernel: acpiphp: Slot [28] registered Jul 6 23:27:06.118978 kernel: acpiphp: Slot [29] registered Jul 6 23:27:06.119005 kernel: acpiphp: Slot [30] registered Jul 6 23:27:06.119023 kernel: acpiphp: Slot [31] registered Jul 6 23:27:06.119040 kernel: PCI host bridge to bus 0000:00 Jul 6 23:27:06.119279 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 6 23:27:06.121754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 6 23:27:06.121960 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 6 23:27:06.122137 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 6 23:27:06.122353 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jul 6 23:27:06.122605 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jul 6 23:27:06.122806 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jul 6 23:27:06.123062 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jul 6 23:27:06.123258 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jul 6 23:27:06.128556 
kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:27:06.128842 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jul 6 23:27:06.129056 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jul 6 23:27:06.129306 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jul 6 23:27:06.130678 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jul 6 23:27:06.130906 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:27:06.131098 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jul 6 23:27:06.131289 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jul 6 23:27:06.133616 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jul 6 23:27:06.133856 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jul 6 23:27:06.134057 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jul 6 23:27:06.134235 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 6 23:27:06.134405 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 6 23:27:06.136497 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 6 23:27:06.136538 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 6 23:27:06.136558 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 6 23:27:06.136577 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 6 23:27:06.136594 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 6 23:27:06.136612 kernel: iommu: Default domain type: Translated Jul 6 23:27:06.136630 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:27:06.136648 kernel: efivars: Registered efivars operations Jul 6 23:27:06.136665 kernel: vgaarb: loaded Jul 6 23:27:06.136682 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:27:06.136700 kernel: VFS: Disk 
quotas dquot_6.6.0 Jul 6 23:27:06.136722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:27:06.136740 kernel: pnp: PnP ACPI init Jul 6 23:27:06.136951 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 6 23:27:06.136978 kernel: pnp: PnP ACPI: found 1 devices Jul 6 23:27:06.136996 kernel: NET: Registered PF_INET protocol family Jul 6 23:27:06.137014 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:27:06.137032 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:27:06.137049 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:27:06.137072 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:27:06.137114 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:27:06.137133 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:27:06.137151 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:27:06.137169 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:27:06.137187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:27:06.137205 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:27:06.137223 kernel: kvm [1]: HYP mode not available Jul 6 23:27:06.137240 kernel: Initialise system trusted keyrings Jul 6 23:27:06.137264 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:27:06.137282 kernel: Key type asymmetric registered Jul 6 23:27:06.137300 kernel: Asymmetric key parser 'x509' registered Jul 6 23:27:06.137319 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 6 23:27:06.137337 kernel: io scheduler mq-deadline registered Jul 6 23:27:06.137355 kernel: io scheduler kyber registered Jul 6 23:27:06.137372 kernel: io scheduler bfq registered Jul 6 23:27:06.140695 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 6 23:27:06.140753 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:27:06.140773 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:27:06.140792 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 6 23:27:06.140810 kernel: ACPI: button: Sleep Button [SLPB] Jul 6 23:27:06.140828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:27:06.140847 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 6 23:27:06.141054 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 6 23:27:06.141100 kernel: printk: legacy console [ttyS0] disabled Jul 6 23:27:06.141121 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 6 23:27:06.141146 kernel: printk: legacy console [ttyS0] enabled Jul 6 23:27:06.141164 kernel: printk: legacy bootconsole [uart0] disabled Jul 6 23:27:06.141181 kernel: thunder_xcv, ver 1.0 Jul 6 23:27:06.141198 kernel: thunder_bgx, ver 1.0 Jul 6 23:27:06.141215 kernel: nicpf, ver 1.0 Jul 6 23:27:06.141232 kernel: nicvf, ver 1.0 Jul 6 23:27:06.141477 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:27:06.141666 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:27:05 UTC (1751844425) Jul 6 23:27:06.141699 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:27:06.141718 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jul 6 23:27:06.141736 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:27:06.141754 kernel: watchdog: NMI not fully supported Jul 6 23:27:06.141771 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:27:06.141789 kernel: Segment Routing with IPv6 Jul 6 23:27:06.141806 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:27:06.141824 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:27:06.141842 kernel: Key type dns_resolver registered Jul 6 23:27:06.141864 
kernel: registered taskstats version 1 Jul 6 23:27:06.141882 kernel: Loading compiled-in X.509 certificates Jul 6 23:27:06.141900 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718' Jul 6 23:27:06.141917 kernel: Demotion targets for Node 0: null Jul 6 23:27:06.141934 kernel: Key type .fscrypt registered Jul 6 23:27:06.141951 kernel: Key type fscrypt-provisioning registered Jul 6 23:27:06.141968 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:27:06.141985 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:27:06.142002 kernel: ima: No architecture policies found Jul 6 23:27:06.142024 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:27:06.142042 kernel: clk: Disabling unused clocks Jul 6 23:27:06.142059 kernel: PM: genpd: Disabling unused power domains Jul 6 23:27:06.142076 kernel: Warning: unable to open an initial console. Jul 6 23:27:06.142094 kernel: Freeing unused kernel memory: 39488K Jul 6 23:27:06.142111 kernel: Run /init as init process Jul 6 23:27:06.142128 kernel: with arguments: Jul 6 23:27:06.142145 kernel: /init Jul 6 23:27:06.142162 kernel: with environment: Jul 6 23:27:06.142179 kernel: HOME=/ Jul 6 23:27:06.142201 kernel: TERM=linux Jul 6 23:27:06.142218 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:27:06.142237 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:27:06.142261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:27:06.142281 systemd[1]: Detected virtualization amazon. Jul 6 23:27:06.142299 systemd[1]: Detected architecture arm64. Jul 6 23:27:06.142317 systemd[1]: Running in initrd. 
Jul 6 23:27:06.142340 systemd[1]: No hostname configured, using default hostname. Jul 6 23:27:06.142359 systemd[1]: Hostname set to . Jul 6 23:27:06.142378 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:27:06.142396 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:27:06.145090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:27:06.145143 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:06.145166 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:27:06.145187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:27:06.145216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:27:06.145238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:27:06.145259 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:27:06.145845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:27:06.145871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:27:06.145891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:06.145910 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:06.145936 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:27:06.145955 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:27:06.145975 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:06.145994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:27:06.146013 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 6 23:27:06.146033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:27:06.146052 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:27:06.146071 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:06.146094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:06.146113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:06.146132 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:06.146151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:27:06.146170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:27:06.146189 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:27:06.146209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 6 23:27:06.146228 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:27:06.146247 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:27:06.146270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:27:06.146289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:06.146308 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:27:06.146328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:27:06.146351 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:27:06.146371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:27:06.146451 systemd-journald[256]: Collecting audit messages is disabled. 
Jul 6 23:27:06.146495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:27:06.146520 kernel: Bridge firewalling registered Jul 6 23:27:06.146555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:06.146576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:27:06.146595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:06.146615 systemd-journald[256]: Journal started Jul 6 23:27:06.146656 systemd-journald[256]: Runtime Journal (/run/log/journal/ec2601905f54a46064d9c3e1607937dd) is 8M, max 75.3M, 67.3M free. Jul 6 23:27:06.081693 systemd-modules-load[258]: Inserted module 'overlay' Jul 6 23:27:06.117826 systemd-modules-load[258]: Inserted module 'br_netfilter' Jul 6 23:27:06.164583 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:27:06.160610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:27:06.171528 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:27:06.178973 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:27:06.194118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:27:06.201310 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:06.224525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:06.235000 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:27:06.244278 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 6 23:27:06.249609 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 6 23:27:06.256868 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:06.270939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:27:06.303863 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:27:06.355824 systemd-resolved[299]: Positive Trust Anchors: Jul 6 23:27:06.355853 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:27:06.355917 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:27:06.458458 kernel: SCSI subsystem initialized Jul 6 23:27:06.465458 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:27:06.478484 kernel: iscsi: registered transport (tcp) Jul 6 23:27:06.499952 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:27:06.500035 kernel: QLogic iSCSI HBA Driver Jul 6 23:27:06.533596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 6 23:27:06.559463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:27:06.575908 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:27:06.646467 kernel: random: crng init done
Jul 6 23:27:06.647245 systemd-resolved[299]: Defaulting to hostname 'linux'.
Jul 6 23:27:06.650585 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:06.655754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:06.681318 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:27:06.687933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:27:06.782487 kernel: raid6: neonx8 gen() 6430 MB/s
Jul 6 23:27:06.799466 kernel: raid6: neonx4 gen() 6404 MB/s
Jul 6 23:27:06.816461 kernel: raid6: neonx2 gen() 5330 MB/s
Jul 6 23:27:06.833464 kernel: raid6: neonx1 gen() 3916 MB/s
Jul 6 23:27:06.850461 kernel: raid6: int64x8 gen() 3629 MB/s
Jul 6 23:27:06.867462 kernel: raid6: int64x4 gen() 3679 MB/s
Jul 6 23:27:06.884459 kernel: raid6: int64x2 gen() 3556 MB/s
Jul 6 23:27:06.902475 kernel: raid6: int64x1 gen() 2764 MB/s
Jul 6 23:27:06.902520 kernel: raid6: using algorithm neonx8 gen() 6430 MB/s
Jul 6 23:27:06.921461 kernel: raid6: .... xor() 4754 MB/s, rmw enabled
Jul 6 23:27:06.921511 kernel: raid6: using neon recovery algorithm
Jul 6 23:27:06.930252 kernel: xor: measuring software checksum speed
Jul 6 23:27:06.930305 kernel: 8regs : 12299 MB/sec
Jul 6 23:27:06.932740 kernel: 32regs : 12013 MB/sec
Jul 6 23:27:06.932784 kernel: arm64_neon : 9181 MB/sec
Jul 6 23:27:06.932809 kernel: xor: using function: 8regs (12299 MB/sec)
Jul 6 23:27:07.026480 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:27:07.038145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:27:07.044638 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:07.095777 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Jul 6 23:27:07.107806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:07.117974 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:27:07.165475 dracut-pre-trigger[515]: rd.md=0: removing MD RAID activation
Jul 6 23:27:07.209780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:07.214973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:27:07.364257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:07.374978 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:27:07.504999 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:27:07.505082 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 6 23:27:07.513645 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:27:07.513978 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:27:07.525464 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c1:e4:f6:f5:57
Jul 6 23:27:07.535206 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 6 23:27:07.535268 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:27:07.548455 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:27:07.556675 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:27:07.556736 kernel: GPT:9289727 != 16777215
Jul 6 23:27:07.558473 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:27:07.560320 kernel: GPT:9289727 != 16777215
Jul 6 23:27:07.560380 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:27:07.562890 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:07.565339 (udev-worker)[559]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:27:07.590059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:27:07.592579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:07.597837 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:07.603995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:07.614380 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:07.628469 kernel: nvme nvme0: using unchecked data buffer
Jul 6 23:27:07.660179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:07.763758 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:27:07.832656 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:27:07.836380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:07.882075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:27:07.902957 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:27:07.905701 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:27:07.915093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:07.921414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:07.924036 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:27:07.933622 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:27:07.940525 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:27:07.968604 disk-uuid[686]: Primary Header is updated.
Jul 6 23:27:07.968604 disk-uuid[686]: Secondary Entries is updated.
Jul 6 23:27:07.968604 disk-uuid[686]: Secondary Header is updated.
Jul 6 23:27:07.979466 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:07.992819 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:08.999841 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:09.002653 disk-uuid[688]: The operation has completed successfully.
Jul 6 23:27:09.175051 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:27:09.176514 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:27:09.261936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:27:09.298698 sh[954]: Success
Jul 6 23:27:09.328517 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:27:09.328591 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:27:09.330520 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:27:09.342488 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:27:09.446496 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:27:09.453100 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:27:09.483621 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:27:09.499470 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:27:09.502458 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977)
Jul 6 23:27:09.506737 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 6 23:27:09.506786 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:09.507968 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:27:09.617764 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:27:09.622221 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:27:09.627035 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:27:09.632328 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:27:09.647261 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:27:09.691476 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1002)
Jul 6 23:27:09.696941 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:09.697013 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:09.698471 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:09.713494 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:09.716817 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:27:09.723030 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:27:09.823741 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:27:09.833905 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:27:09.906326 systemd-networkd[1147]: lo: Link UP
Jul 6 23:27:09.906339 systemd-networkd[1147]: lo: Gained carrier
Jul 6 23:27:09.911127 systemd-networkd[1147]: Enumeration completed
Jul 6 23:27:09.911290 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:27:09.913863 systemd[1]: Reached target network.target - Network.
Jul 6 23:27:09.916191 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:09.916199 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:27:09.931307 systemd-networkd[1147]: eth0: Link UP
Jul 6 23:27:09.931320 systemd-networkd[1147]: eth0: Gained carrier
Jul 6 23:27:09.931341 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:09.957506 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.24.125/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:27:10.219242 ignition[1061]: Ignition 2.21.0
Jul 6 23:27:10.219284 ignition[1061]: Stage: fetch-offline
Jul 6 23:27:10.220456 ignition[1061]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.220491 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.222173 ignition[1061]: Ignition finished successfully
Jul 6 23:27:10.232818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:10.237860 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:27:10.287818 ignition[1159]: Ignition 2.21.0
Jul 6 23:27:10.288306 ignition[1159]: Stage: fetch
Jul 6 23:27:10.288885 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.288909 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.289233 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.308550 ignition[1159]: PUT result: OK
Jul 6 23:27:10.312191 ignition[1159]: parsed url from cmdline: ""
Jul 6 23:27:10.312326 ignition[1159]: no config URL provided
Jul 6 23:27:10.312346 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:27:10.312370 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:27:10.313353 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.318604 ignition[1159]: PUT result: OK
Jul 6 23:27:10.322444 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:27:10.325543 ignition[1159]: GET result: OK
Jul 6 23:27:10.326679 ignition[1159]: parsing config with SHA512: e60f94957d232d90e91645a1fc10b64f8575cdc7ab3dc49ea41af8920a8fd0091d8d64b2a8824b38a82543473f8f57cd8e8c0ef08fb2a17752e23b9aff4a5b62
Jul 6 23:27:10.345627 unknown[1159]: fetched base config from "system"
Jul 6 23:27:10.346296 ignition[1159]: fetch: fetch complete
Jul 6 23:27:10.345648 unknown[1159]: fetched base config from "system"
Jul 6 23:27:10.346317 ignition[1159]: fetch: fetch passed
Jul 6 23:27:10.345661 unknown[1159]: fetched user config from "aws"
Jul 6 23:27:10.346405 ignition[1159]: Ignition finished successfully
Jul 6 23:27:10.362484 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:27:10.368370 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:27:10.425397 ignition[1166]: Ignition 2.21.0
Jul 6 23:27:10.425993 ignition[1166]: Stage: kargs
Jul 6 23:27:10.426610 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.426644 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.426822 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.436102 ignition[1166]: PUT result: OK
Jul 6 23:27:10.440616 ignition[1166]: kargs: kargs passed
Jul 6 23:27:10.442051 ignition[1166]: Ignition finished successfully
Jul 6 23:27:10.448518 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:10.453079 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:27:10.490071 ignition[1172]: Ignition 2.21.0
Jul 6 23:27:10.490662 ignition[1172]: Stage: disks
Jul 6 23:27:10.491220 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.491243 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.491445 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.501139 ignition[1172]: PUT result: OK
Jul 6 23:27:10.510219 ignition[1172]: disks: disks passed
Jul 6 23:27:10.510352 ignition[1172]: Ignition finished successfully
Jul 6 23:27:10.515679 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:27:10.519762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:10.520226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:27:10.520962 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:27:10.524067 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:27:10.525235 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:27:10.527389 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:27:10.584828 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:27:10.590477 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:27:10.599536 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:27:10.723442 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 6 23:27:10.724479 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:27:10.728471 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:27:10.733844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:10.743150 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:27:10.753985 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:27:10.754077 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:27:10.754133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:10.771594 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:27:10.777982 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:27:10.792831 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Jul 6 23:27:10.797824 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:10.797878 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:10.799163 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:10.807382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:11.200449 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:27:11.231464 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:27:11.239824 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:27:11.248715 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:27:11.552880 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:27:11.562169 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:27:11.578576 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:11.594086 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:27:11.601228 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:11.646960 ignition[1314]: INFO : Ignition 2.21.0
Jul 6 23:27:11.646960 ignition[1314]: INFO : Stage: mount
Jul 6 23:27:11.651455 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:11.651455 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:11.651455 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:11.651455 ignition[1314]: INFO : PUT result: OK
Jul 6 23:27:11.652044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:11.668543 ignition[1314]: INFO : mount: mount passed
Jul 6 23:27:11.668543 ignition[1314]: INFO : Ignition finished successfully
Jul 6 23:27:11.677459 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:27:11.680006 systemd-networkd[1147]: eth0: Gained IPv6LL
Jul 6 23:27:11.687534 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:27:11.728705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:11.766468 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325)
Jul 6 23:27:11.770220 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:11.770259 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:11.771558 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:11.780043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:11.825730 ignition[1342]: INFO : Ignition 2.21.0
Jul 6 23:27:11.825730 ignition[1342]: INFO : Stage: files
Jul 6 23:27:11.830351 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:11.830351 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:11.830351 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:11.838204 ignition[1342]: INFO : PUT result: OK
Jul 6 23:27:11.842509 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:27:11.855647 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:27:11.858854 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:27:11.883220 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:27:11.889274 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:27:11.889274 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:27:11.889274 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:27:11.889274 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 6 23:27:11.884955 unknown[1342]: wrote ssh authorized keys file for user: core
Jul 6 23:27:11.976206 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:27:12.127171 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:27:12.127171 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:12.135562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:27:12.163401 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 6 23:27:12.879918 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:27:13.266072 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:27:13.270927 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:27:13.278491 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:13.283488 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:13.283488 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:27:13.283488 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:13.283488 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:13.296499 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:13.296499 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:13.296499 ignition[1342]: INFO : files: files passed
Jul 6 23:27:13.296499 ignition[1342]: INFO : Ignition finished successfully
Jul 6 23:27:13.309480 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:27:13.314712 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:27:13.320944 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:27:13.342505 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:27:13.344500 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:27:13.363392 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.363392 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.370597 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.379024 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:13.385825 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:27:13.392595 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:27:13.477499 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:27:13.478880 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:27:13.484704 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:27:13.489540 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:27:13.492053 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:27:13.497189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:27:13.548270 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:13.554979 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:27:13.594613 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:13.597991 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:13.602547 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:27:13.605083 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:27:13.605643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:13.617812 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:27:13.623644 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:27:13.627779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:27:13.631397 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:13.638752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:13.641656 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:27:13.648850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:27:13.651475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:13.659111 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:27:13.663727 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:27:13.666652 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:27:13.672155 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:27:13.672389 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:13.679852 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:13.684815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:13.687629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:27:13.692362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:13.695220 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:27:13.695457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:13.704808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:27:13.705244 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:13.714201 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:27:13.714562 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:27:13.722115 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:27:13.725253 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:27:13.725612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:13.739813 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:13.743929 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:27:13.744943 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:13.753577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:27:13.754368 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:13.770674 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:27:13.771065 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:27:13.803271 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:27:13.809933 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:27:13.812079 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:13.817057 ignition[1396]: INFO : Ignition 2.21.0
Jul 6 23:27:13.817057 ignition[1396]: INFO : Stage: umount
Jul 6 23:27:13.817057 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:13.817057 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:13.817057 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:13.829713 ignition[1396]: INFO : PUT result: OK
Jul 6 23:27:13.835892 ignition[1396]: INFO : umount: umount passed
Jul 6 23:27:13.835892 ignition[1396]: INFO : Ignition finished successfully
Jul 6 23:27:13.839454 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:27:13.842104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:27:13.845636 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:27:13.845732 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:27:13.851578 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:27:13.851668 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:13.854550 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:27:13.854631 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:27:13.860669 systemd[1]: Stopped target network.target - Network.
Jul 6 23:27:13.862840 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:27:13.862927 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:13.869723 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:27:13.872498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:27:13.876330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:13.882685 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:27:13.887769 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:27:13.890322 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:27:13.890394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:27:13.894596 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:27:13.894666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:27:13.897161 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:27:13.897257 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:27:13.905368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:27:13.905482 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:27:13.909575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:27:13.909664 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:27:13.912716 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:27:13.916817 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:27:13.934818 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:27:13.935085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:27:13.941102 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:27:13.941603 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:27:13.941808 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:27:13.978600 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Jul 6 23:27:13.979715 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:27:13.987275 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:27:13.987570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:13.996388 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:27:13.999646 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:27:14.000332 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:27:14.005361 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:27:14.005539 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:14.015075 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:27:14.015180 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:14.017683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:27:14.017770 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:14.032236 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:27:14.037598 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:27:14.037724 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:14.064083 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:27:14.066586 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:27:14.070183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:27:14.070298 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:14.073952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 6 23:27:14.074024 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:14.075653 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:27:14.075741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:27:14.076412 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:27:14.076506 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:27:14.080540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:27:14.080779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:27:14.119232 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:27:14.126340 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:27:14.126502 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:27:14.147132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:27:14.147241 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:14.150859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:27:14.150943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:14.164365 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:27:14.164518 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:27:14.164612 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:14.165367 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:27:14.165671 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 6 23:27:14.190883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:27:14.192497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:27:14.196210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:27:14.202575 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:27:14.249086 systemd[1]: Switching root. Jul 6 23:27:14.289837 systemd-journald[256]: Journal stopped Jul 6 23:27:16.904190 systemd-journald[256]: Received SIGTERM from PID 1 (systemd). Jul 6 23:27:16.904317 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:27:16.904359 kernel: SELinux: policy capability open_perms=1 Jul 6 23:27:16.904390 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:27:16.904439 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:27:16.904476 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:27:16.904508 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:27:16.904537 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:27:16.904565 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:27:16.904594 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:27:16.904627 kernel: audit: type=1403 audit(1751844434.827:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:27:16.904664 systemd[1]: Successfully loaded SELinux policy in 99.546ms. Jul 6 23:27:16.904715 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.939ms. 
Jul 6 23:27:16.904750 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:27:16.904779 systemd[1]: Detected virtualization amazon. Jul 6 23:27:16.904809 systemd[1]: Detected architecture arm64. Jul 6 23:27:16.904839 systemd[1]: Detected first boot. Jul 6 23:27:16.904868 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:27:16.904899 zram_generator::config[1441]: No configuration found. Jul 6 23:27:16.904935 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:27:16.904964 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:27:16.904995 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:27:16.905048 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:27:16.905083 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:27:16.905115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:27:16.905144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:27:16.905175 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:27:16.905212 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:27:16.905242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:27:16.905272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:27:16.905303 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:27:16.905333 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jul 6 23:27:16.905365 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:27:16.905392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:27:16.911599 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:16.911664 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:27:16.911706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:27:16.911736 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:27:16.911768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:27:16.911797 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:27:16.911827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:27:16.911857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:16.911885 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:27:16.911917 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:27:16.912008 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:27:16.912041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:27:16.912073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:27:16.912105 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:27:16.912134 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:27:16.912164 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:27:16.912194 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 6 23:27:16.912226 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:27:16.912260 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:27:16.912290 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:16.912319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:16.912350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:16.912378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:27:16.912406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:27:16.912461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:27:16.912495 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:27:16.912524 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:27:16.912560 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:27:16.912590 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:27:16.912632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:27:16.912661 systemd[1]: Reached target machines.target - Containers. Jul 6 23:27:16.912689 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:27:16.912718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:16.912746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:27:16.912774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jul 6 23:27:16.912802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:27:16.912835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:27:16.912863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:27:16.912891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:27:16.912919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:27:16.912948 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:27:16.912976 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:27:16.913004 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:27:16.913055 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:27:16.913093 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:27:16.913124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:16.913153 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:27:16.913183 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:27:16.913216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:27:16.913245 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:27:16.913277 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:27:16.913310 kernel: loop: module loaded Jul 6 23:27:16.913339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 6 23:27:16.913372 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:27:16.913401 systemd[1]: Stopped verity-setup.service. Jul 6 23:27:16.916546 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:27:16.916594 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:27:16.916634 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:27:16.916670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:27:16.916701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:27:16.916732 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:27:16.916763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:27:16.916791 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:27:16.916833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:27:16.916861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:27:16.916890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:27:16.916920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:27:16.916950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:27:16.916981 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:27:16.917009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:27:16.917071 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:16.917101 kernel: ACPI: bus type drm_connector registered Jul 6 23:27:16.917134 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:27:16.917165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 6 23:27:16.917192 kernel: fuse: init (API version 7.41) Jul 6 23:27:16.917219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:27:16.917249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:27:16.917277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:27:16.917305 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:27:16.917340 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:27:16.917368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:27:16.917401 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:27:16.920518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:27:16.920568 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:27:16.920602 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:27:16.920633 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:27:16.920669 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:27:16.920698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:27:16.921469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:16.921536 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:27:16.921577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:27:16.921608 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 6 23:27:16.921638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:27:16.921667 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:27:16.921755 systemd-journald[1520]: Collecting audit messages is disabled. Jul 6 23:27:16.921808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:16.921840 systemd-journald[1520]: Journal started Jul 6 23:27:16.921887 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec2601905f54a46064d9c3e1607937dd) is 8M, max 75.3M, 67.3M free. Jul 6 23:27:16.927497 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:27:16.124154 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:27:16.147129 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:27:16.147985 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:27:16.933527 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:27:16.952673 kernel: loop0: detected capacity change from 0 to 107312 Jul 6 23:27:16.958870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:27:16.989711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:27:16.999916 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:27:17.007255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:27:17.026244 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec2601905f54a46064d9c3e1607937dd is 89.535ms for 929 entries. Jul 6 23:27:17.026244 systemd-journald[1520]: System Journal (/var/log/journal/ec2601905f54a46064d9c3e1607937dd) is 8M, max 195.6M, 187.6M free. Jul 6 23:27:17.127952 systemd-journald[1520]: Received client request to flush runtime journal. 
Jul 6 23:27:17.128105 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:27:17.128154 kernel: loop1: detected capacity change from 0 to 61240 Jul 6 23:27:17.050523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:27:17.060885 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:27:17.135527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:27:17.156252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:27:17.169730 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:27:17.175699 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:27:17.186908 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:27:17.219237 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:27:17.226064 kernel: loop2: detected capacity change from 0 to 138376 Jul 6 23:27:17.228865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:27:17.296963 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 6 23:27:17.298193 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 6 23:27:17.321043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:17.341473 kernel: loop3: detected capacity change from 0 to 207008 Jul 6 23:27:17.549466 kernel: loop4: detected capacity change from 0 to 107312 Jul 6 23:27:17.563524 kernel: loop5: detected capacity change from 0 to 61240 Jul 6 23:27:17.582513 kernel: loop6: detected capacity change from 0 to 138376 Jul 6 23:27:17.599470 kernel: loop7: detected capacity change from 0 to 207008 Jul 6 23:27:17.630492 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Jul 6 23:27:17.631525 (sd-merge)[1598]: Merged extensions into '/usr'. Jul 6 23:27:17.639521 systemd[1]: Reload requested from client PID 1547 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:27:17.639558 systemd[1]: Reloading... Jul 6 23:27:17.816579 zram_generator::config[1624]: No configuration found. Jul 6 23:27:18.040877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:18.268207 systemd[1]: Reloading finished in 627 ms. Jul 6 23:27:18.290491 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:27:18.293791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:27:18.311604 systemd[1]: Starting ensure-sysext.service... Jul 6 23:27:18.316673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:27:18.323268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:27:18.380668 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:27:18.380707 systemd[1]: Reloading... Jul 6 23:27:18.412925 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:27:18.413773 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:27:18.414728 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:27:18.415553 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:27:18.418182 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 6 23:27:18.419019 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 6 23:27:18.419962 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 6 23:27:18.429856 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:27:18.430768 systemd-tmpfiles[1677]: Skipping /boot Jul 6 23:27:18.456368 ldconfig[1540]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:27:18.474782 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:27:18.475030 systemd-udevd[1678]: Using default interface naming scheme 'v255'. Jul 6 23:27:18.476056 systemd-tmpfiles[1677]: Skipping /boot Jul 6 23:27:18.585476 zram_generator::config[1717]: No configuration found. Jul 6 23:27:18.896230 (udev-worker)[1728]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:27:18.914871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:19.153606 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:27:19.154594 systemd[1]: Reloading finished in 773 ms. Jul 6 23:27:19.235917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:27:19.239947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:27:19.245470 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:19.302825 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:27:19.314772 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:27:19.320727 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 6 23:27:19.329935 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:27:19.387773 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:27:19.394452 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:27:19.544246 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:27:19.556571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:19.561740 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:27:19.566755 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:27:19.602740 augenrules[1871]: No rules Jul 6 23:27:19.606310 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:19.608579 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:19.632020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:19.637662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:27:19.643250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:27:19.649092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:27:19.651556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:19.651797 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:19.655990 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 6 23:27:19.658468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:27:19.663523 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:27:19.716377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:27:19.718537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:27:19.722481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:27:19.722852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:27:19.733583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:27:19.733996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:27:19.751651 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:27:19.761821 systemd[1]: Finished ensure-sysext.service. Jul 6 23:27:19.774745 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:27:19.776939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:19.779837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:27:19.782140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:19.782225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:19.782299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 6 23:27:19.782394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:27:19.782479 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:27:19.783349 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:27:19.839610 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:27:19.840562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:27:19.896540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:19.927385 augenrules[1918]: /sbin/augenrules: No change Jul 6 23:27:19.948468 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:27:19.979817 augenrules[1967]: No rules Jul 6 23:27:19.985041 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:19.987151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:20.011840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:27:20.022955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:27:20.078502 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:27:20.137071 systemd-networkd[1820]: lo: Link UP Jul 6 23:27:20.137649 systemd-networkd[1820]: lo: Gained carrier Jul 6 23:27:20.138367 systemd-resolved[1831]: Positive Trust Anchors: Jul 6 23:27:20.138388 systemd-resolved[1831]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:27:20.138487 systemd-resolved[1831]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:27:20.141166 systemd-networkd[1820]: Enumeration completed Jul 6 23:27:20.141341 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:27:20.147043 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:20.147252 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:27:20.151736 systemd-networkd[1820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:27:20.153649 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:27:20.159575 systemd-networkd[1820]: eth0: Link UP Jul 6 23:27:20.159861 systemd-networkd[1820]: eth0: Gained carrier Jul 6 23:27:20.159898 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:20.164081 systemd-resolved[1831]: Defaulting to hostname 'linux'. Jul 6 23:27:20.171379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:27:20.173942 systemd[1]: Reached target network.target - Network. 
Jul 6 23:27:20.175931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:27:20.178534 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:27:20.178697 systemd-networkd[1820]: eth0: DHCPv4 address 172.31.24.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:27:20.182783 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:27:20.188815 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:27:20.193519 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:27:20.196103 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:27:20.198987 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:27:20.201848 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:27:20.202033 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:20.204180 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:20.208028 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:27:20.215028 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:27:20.226231 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:27:20.230825 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:27:20.233460 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:27:20.244672 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:27:20.247637 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jul 6 23:27:20.259505 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:27:20.262689 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:27:20.265846 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:20.268047 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:27:20.271078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:20.271272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:20.275586 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:27:20.282755 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:27:20.289123 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:27:20.294714 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:27:20.303191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:27:20.310807 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:27:20.313236 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:27:20.320888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:27:20.331821 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:27:20.340884 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:27:20.355686 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:27:20.364851 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:27:20.374553 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 6 23:27:20.389034 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:27:20.393713 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:27:20.410840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:27:20.416707 jq[1989]: false Jul 6 23:27:20.420261 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:27:20.426724 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:27:20.438250 extend-filesystems[1990]: Found /dev/nvme0n1p6 Jul 6 23:27:20.448489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:27:20.456844 jq[2004]: true Jul 6 23:27:20.453179 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:27:20.453676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:27:20.521582 extend-filesystems[1990]: Found /dev/nvme0n1p9 Jul 6 23:27:20.536766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:27:20.537387 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:27:20.547259 extend-filesystems[1990]: Checking size of /dev/nvme0n1p9 Jul 6 23:27:20.553569 update_engine[2003]: I20250706 23:27:20.546568 2003 main.cc:92] Flatcar Update Engine starting Jul 6 23:27:20.572110 dbus-daemon[1987]: [system] SELinux support is enabled Jul 6 23:27:20.576890 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:27:20.591901 tar[2007]: linux-arm64/LICENSE Jul 6 23:27:20.592323 tar[2007]: linux-arm64/helm Jul 6 23:27:20.593956 dbus-daemon[1987]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1820 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:20.594071 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:27:20.594128 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:27:20.597651 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:27:20.597690 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:27:20.608673 update_engine[2003]: I20250706 23:27:20.607729 2003 update_check_scheduler.cc:74] Next update check in 2m44s Jul 6 23:27:20.615693 extend-filesystems[1990]: Resized partition /dev/nvme0n1p9 Jul 6 23:27:20.618743 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:27:20.619336 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:27:20.632342 extend-filesystems[2040]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:27:20.634716 jq[2009]: true Jul 6 23:27:20.639895 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 6 23:27:20.674030 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:27:20.681053 (ntainerd)[2034]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:27:20.709446 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:27:20.713095 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:27:20.716635 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:27:20.731183 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 6 23:27:20.773595 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:27:20.769801 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:27:20.782553 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:00 UTC 2025 (1): Starting Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:00 UTC 2025 (1): Starting Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: ---------------------------------------------------- Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: corporation. 
Support and training for ntp-4 are Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: available at https://www.nwtime.org/support Jul 6 23:27:20.792052 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: ---------------------------------------------------- Jul 6 23:27:20.782605 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:27:20.782625 ntpd[1992]: ---------------------------------------------------- Jul 6 23:27:20.782641 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:27:20.782658 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:27:20.782675 ntpd[1992]: corporation. Support and training for ntp-4 are Jul 6 23:27:20.782692 ntpd[1992]: available at https://www.nwtime.org/support Jul 6 23:27:20.782707 ntpd[1992]: ---------------------------------------------------- Jul 6 23:27:20.795380 ntpd[1992]: proto: precision = 0.096 usec (-23) Jul 6 23:27:20.795824 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: proto: precision = 0.096 usec (-23) Jul 6 23:27:20.798447 extend-filesystems[2040]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:27:20.798447 extend-filesystems[2040]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:27:20.798447 extend-filesystems[2040]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: basedate set to 2025-06-24 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: gps base set to 2025-06-29 (week 2373) Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listen normally on 3 eth0 172.31.24.125:123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: bind(21) AF_INET6 fe80::4c1:e4ff:fef6:f557%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: unable to create socket on eth0 (5) for fe80::4c1:e4ff:fef6:f557%2#123 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: failed to init interface for address fe80::4c1:e4ff:fef6:f557%2 Jul 6 23:27:20.812780 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jul 6 23:27:20.796366 ntpd[1992]: basedate set to 2025-06-24 Jul 6 23:27:20.813372 extend-filesystems[1990]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:27:20.808721 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:27:20.796395 ntpd[1992]: gps base set to 2025-06-29 (week 2373) Jul 6 23:27:20.809193 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 6 23:27:20.803141 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:27:20.803216 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:27:20.803522 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:27:20.803581 ntpd[1992]: Listen normally on 3 eth0 172.31.24.125:123 Jul 6 23:27:20.803646 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jul 6 23:27:20.810464 ntpd[1992]: bind(21) AF_INET6 fe80::4c1:e4ff:fef6:f557%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:27:20.810540 ntpd[1992]: unable to create socket on eth0 (5) for fe80::4c1:e4ff:fef6:f557%2#123 Jul 6 23:27:20.810567 ntpd[1992]: failed to init interface for address fe80::4c1:e4ff:fef6:f557%2 Jul 6 23:27:20.810642 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jul 6 23:27:20.829490 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:20.830121 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:20.830121 ntpd[1992]: 6 Jul 23:27:20 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:20.829535 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:20.902723 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:27:20.903264 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 6 23:27:20.904257 systemd-logind[2000]: New seat seat0. Jul 6 23:27:20.906273 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 6 23:27:20.930682 coreos-metadata[1986]: Jul 06 23:27:20.930 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:20.943028 coreos-metadata[1986]: Jul 06 23:27:20.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:27:20.943028 coreos-metadata[1986]: Jul 06 23:27:20.942 INFO Fetch successful Jul 6 23:27:20.943028 coreos-metadata[1986]: Jul 06 23:27:20.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:27:20.947543 coreos-metadata[1986]: Jul 06 23:27:20.947 INFO Fetch successful Jul 6 23:27:20.947543 coreos-metadata[1986]: Jul 06 23:27:20.947 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:27:20.949568 coreos-metadata[1986]: Jul 06 23:27:20.949 INFO Fetch successful Jul 6 23:27:20.949568 coreos-metadata[1986]: Jul 06 23:27:20.949 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:27:20.953816 coreos-metadata[1986]: Jul 06 23:27:20.953 INFO Fetch successful Jul 6 23:27:20.954091 coreos-metadata[1986]: Jul 06 23:27:20.953 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:27:20.960578 coreos-metadata[1986]: Jul 06 23:27:20.960 INFO Fetch failed with 404: resource not found Jul 6 23:27:20.961102 coreos-metadata[1986]: Jul 06 23:27:20.960 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:27:20.965890 coreos-metadata[1986]: Jul 06 23:27:20.965 INFO Fetch successful Jul 6 23:27:20.965890 coreos-metadata[1986]: Jul 06 23:27:20.965 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:27:20.968611 coreos-metadata[1986]: Jul 06 23:27:20.968 INFO Fetch successful Jul 6 23:27:20.968611 coreos-metadata[1986]: Jul 06 23:27:20.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:27:20.975118 
coreos-metadata[1986]: Jul 06 23:27:20.974 INFO Fetch successful Jul 6 23:27:20.975593 coreos-metadata[1986]: Jul 06 23:27:20.975 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:27:20.979551 coreos-metadata[1986]: Jul 06 23:27:20.979 INFO Fetch successful Jul 6 23:27:20.979551 coreos-metadata[1986]: Jul 06 23:27:20.979 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:27:20.981297 coreos-metadata[1986]: Jul 06 23:27:20.981 INFO Fetch successful Jul 6 23:27:21.018756 bash[2074]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:21.024500 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:27:21.032510 systemd[1]: Starting sshkeys.service... Jul 6 23:27:21.100540 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:27:21.103758 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:27:21.113801 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:27:21.122956 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:27:21.201207 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:27:21.205185 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:27:21.211162 dbus-daemon[1987]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2041 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:21.229041 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 6 23:27:21.302132 locksmithd[2043]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:27:21.334789 systemd-networkd[1820]: eth0: Gained IPv6LL Jul 6 23:27:21.348467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:27:21.355284 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:27:21.365093 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:27:21.376145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:21.383138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:27:21.493810 coreos-metadata[2100]: Jul 06 23:27:21.493 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:21.499510 coreos-metadata[2100]: Jul 06 23:27:21.499 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:27:21.509453 coreos-metadata[2100]: Jul 06 23:27:21.508 INFO Fetch successful Jul 6 23:27:21.509453 coreos-metadata[2100]: Jul 06 23:27:21.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:27:21.512565 coreos-metadata[2100]: Jul 06 23:27:21.512 INFO Fetch successful Jul 6 23:27:21.520072 unknown[2100]: wrote ssh authorized keys file for user: core Jul 6 23:27:21.644499 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:27:21.678380 polkitd[2105]: Started polkitd version 126 Jul 6 23:27:21.689736 update-ssh-keys[2172]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:21.693079 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:27:21.711474 systemd[1]: Finished sshkeys.service. 
Jul 6 23:27:21.825210 polkitd[2105]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:27:21.825863 polkitd[2105]: Loading rules from directory /run/polkit-1/rules.d Jul 6 23:27:21.825944 polkitd[2105]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:27:21.831942 polkitd[2105]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 6 23:27:21.834957 polkitd[2105]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:27:21.835046 polkitd[2105]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:27:21.842653 polkitd[2105]: Finished loading, compiling and executing 2 rules Jul 6 23:27:21.846030 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:27:21.859254 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:27:21.864525 containerd[2034]: time="2025-07-06T23:27:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:27:21.862979 polkitd[2105]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:27:21.868099 containerd[2034]: time="2025-07-06T23:27:21.868040929Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:27:21.896154 amazon-ssm-agent[2145]: Initializing new seelog logger Jul 6 23:27:21.896154 amazon-ssm-agent[2145]: New Seelog Logger Creation Complete Jul 6 23:27:21.896154 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.896154 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 6 23:27:21.896154 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 processing appconfig overrides Jul 6 23:27:21.901212 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.901212 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.901212 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 processing appconfig overrides Jul 6 23:27:21.908342 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.8984 INFO Proxy environment variables: Jul 6 23:27:21.910816 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.910816 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.910816 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 processing appconfig overrides Jul 6 23:27:21.917958 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:21.917958 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 6 23:27:21.919776 amazon-ssm-agent[2145]: 2025/07/06 23:27:21 processing appconfig overrides Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.989777977Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.012µs" Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.989835877Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.989872825Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.990197773Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.990238225Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:27:21.990975 containerd[2034]: time="2025-07-06T23:27:21.990293617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:21.994579 containerd[2034]: time="2025-07-06T23:27:21.990412477Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:21.994693 systemd-hostnamed[2041]: Hostname set to (transient) Jul 6 23:27:21.995821 systemd-resolved[1831]: System hostname changed to 'ip-172-31-24-125'. 
Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.998839717Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999312337Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999347089Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999373489Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999395941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999613537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:21.999980389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:22.000037845Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:22.000302 containerd[2034]: time="2025-07-06T23:27:22.000064737Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:27:22.004538 containerd[2034]: 
time="2025-07-06T23:27:22.003154389Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:27:22.004538 containerd[2034]: time="2025-07-06T23:27:22.003844389Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:27:22.004538 containerd[2034]: time="2025-07-06T23:27:22.004026201Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:27:22.010536 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.8985 INFO https_proxy: Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.019853086Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.019955950Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.019986994Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020016574Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020045362Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020074786Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020103430Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020133730Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 
23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020164162Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020190238Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020214406Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:27:22.020460 containerd[2034]: time="2025-07-06T23:27:22.020243986Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022214122Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022278658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022318990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022346830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022376194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.022405150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.023522062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.023562058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.023591110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.023618530Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.023649538Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.024042370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:27:22.024145 containerd[2034]: time="2025-07-06T23:27:22.024074962Z" level=info msg="Start snapshots syncer"
Jul 6 23:27:22.028984 containerd[2034]: time="2025-07-06T23:27:22.027239086Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:27:22.028984 containerd[2034]: time="2025-07-06T23:27:22.027720502Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.027823282Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.027976738Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028214926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028265398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028294618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028327474Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028360870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:27:22.029300 containerd[2034]: time="2025-07-06T23:27:22.028388098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.028414306Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.032789794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.032824474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.032853274Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.032943562Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.032982442Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033091894Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033118414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033138802Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033163114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033190150Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033359698Z" level=info msg="runtime interface created"
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033376834Z" level=info msg="created NRI interface"
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033398026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:27:22.033560 containerd[2034]: time="2025-07-06T23:27:22.033445450Z" level=info msg="Connect containerd service"
Jul 6 23:27:22.034240 containerd[2034]: time="2025-07-06T23:27:22.033502642Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:27:22.039014 containerd[2034]: time="2025-07-06T23:27:22.038355814Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:27:22.112565 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.9004 INFO http_proxy:
Jul 6 23:27:22.217651 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.9005 INFO no_proxy:
Jul 6 23:27:22.313219 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.9007 INFO Checking if agent identity type OnPrem can be assumed
Jul 6 23:27:22.413503 amazon-ssm-agent[2145]: 2025-07-06 23:27:21.9008 INFO Checking if agent identity type EC2 can be assumed
Jul 6 23:27:22.474220 containerd[2034]: time="2025-07-06T23:27:22.472317540Z" level=info msg="Start subscribing containerd event"
Jul 6 23:27:22.476171 containerd[2034]: time="2025-07-06T23:27:22.472653840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:27:22.476381 containerd[2034]: time="2025-07-06T23:27:22.475961076Z" level=info msg="Start recovering state"
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.476559024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478695012Z" level=info msg="Start event monitor"
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478724112Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478768788Z" level=info msg="Start streaming server"
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478793412Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478812144Z" level=info msg="runtime interface starting up..."
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478853136Z" level=info msg="starting plugins..."
Jul 6 23:27:22.479527 containerd[2034]: time="2025-07-06T23:27:22.478882428Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:27:22.481102 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:27:22.485884 containerd[2034]: time="2025-07-06T23:27:22.485481216Z" level=info msg="containerd successfully booted in 0.625164s"
Jul 6 23:27:22.511113 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1671 INFO Agent will take identity from EC2
Jul 6 23:27:22.610444 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1717 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Jul 6 23:27:22.710521 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1717 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jul 6 23:27:22.800905 sshd_keygen[2038]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:27:22.808626 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1717 INFO [amazon-ssm-agent] Starting Core Agent
Jul 6 23:27:22.865240 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:27:22.876247 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:27:22.883756 systemd[1]: Started sshd@0-172.31.24.125:22-139.178.89.65:37936.service - OpenSSH per-connection server daemon (139.178.89.65:37936).
Jul 6 23:27:22.908672 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1717 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Jul 6 23:27:22.924058 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:27:22.925247 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:27:22.937615 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:27:23.001216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:27:23.008974 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:27:23.014467 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1717 INFO [Registrar] Starting registrar module
Jul 6 23:27:23.015067 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:27:23.017941 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:27:23.114290 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1733 INFO [EC2Identity] Checking disk for registration info
Jul 6 23:27:23.165108 tar[2007]: linux-arm64/README.md
Jul 6 23:27:23.190507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:27:23.202452 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 37936 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:23.208060 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:23.214351 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1734 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Jul 6 23:27:23.227399 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:27:23.232753 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:27:23.264510 systemd-logind[2000]: New session 1 of user core.
Jul 6 23:27:23.290569 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:27:23.299860 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:27:23.315202 amazon-ssm-agent[2145]: 2025-07-06 23:27:22.1734 INFO [EC2Identity] Generating registration keypair
Jul 6 23:27:23.323167 (systemd)[2252]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:27:23.330178 systemd-logind[2000]: New session c1 of user core.
Jul 6 23:27:23.648546 systemd[2252]: Queued start job for default target default.target.
Jul 6 23:27:23.659346 systemd[2252]: Created slice app.slice - User Application Slice.
Jul 6 23:27:23.659402 systemd[2252]: Reached target paths.target - Paths.
Jul 6 23:27:23.659983 systemd[2252]: Reached target timers.target - Timers.
Jul 6 23:27:23.662936 systemd[2252]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:27:23.688010 systemd[2252]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:27:23.688139 systemd[2252]: Reached target sockets.target - Sockets.
Jul 6 23:27:23.688239 systemd[2252]: Reached target basic.target - Basic System.
Jul 6 23:27:23.688328 systemd[2252]: Reached target default.target - Main User Target.
Jul 6 23:27:23.688389 systemd[2252]: Startup finished in 340ms.
Jul 6 23:27:23.688657 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:27:23.700708 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:27:23.783739 ntpd[1992]: Listen normally on 6 eth0 [fe80::4c1:e4ff:fef6:f557%2]:123
Jul 6 23:27:23.784218 ntpd[1992]: 6 Jul 23:27:23 ntpd[1992]: Listen normally on 6 eth0 [fe80::4c1:e4ff:fef6:f557%2]:123
Jul 6 23:27:23.874854 systemd[1]: Started sshd@1-172.31.24.125:22-139.178.89.65:37944.service - OpenSSH per-connection server daemon (139.178.89.65:37944).
Jul 6 23:27:24.098561 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 37944 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:24.101937 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:24.112317 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1109 INFO [EC2Identity] Checking write access before registering
Jul 6 23:27:24.116049 systemd-logind[2000]: New session 2 of user core.
Jul 6 23:27:24.122723 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:27:24.148348 amazon-ssm-agent[2145]: 2025/07/06 23:27:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:27:24.148348 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 6 23:27:24.148348 amazon-ssm-agent[2145]: 2025/07/06 23:27:24 processing appconfig overrides
Jul 6 23:27:24.174719 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1117 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Jul 6 23:27:24.174719 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1474 INFO [EC2Identity] EC2 registration was successful.
Jul 6 23:27:24.174719 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1475 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Jul 6 23:27:24.174912 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1476 INFO [CredentialRefresher] credentialRefresher has started
Jul 6 23:27:24.174912 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1476 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 6 23:27:24.174912 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1743 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 6 23:27:24.174912 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1746 INFO [CredentialRefresher] Credentials ready
Jul 6 23:27:24.213193 amazon-ssm-agent[2145]: 2025-07-06 23:27:24.1748 INFO [CredentialRefresher] Next credential rotation will be in 29.9999917864 minutes
Jul 6 23:27:24.251020 sshd[2265]: Connection closed by 139.178.89.65 port 37944
Jul 6 23:27:24.251694 sshd-session[2263]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:24.258268 systemd[1]: sshd@1-172.31.24.125:22-139.178.89.65:37944.service: Deactivated successfully.
Jul 6 23:27:24.262181 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:27:24.266299 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:27:24.268358 systemd-logind[2000]: Removed session 2.
Jul 6 23:27:24.286105 systemd[1]: Started sshd@2-172.31.24.125:22-139.178.89.65:37960.service - OpenSSH per-connection server daemon (139.178.89.65:37960).
Jul 6 23:27:24.496153 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 37960 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:24.499287 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:24.509502 systemd-logind[2000]: New session 3 of user core.
Jul 6 23:27:24.511704 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:27:24.639465 sshd[2273]: Connection closed by 139.178.89.65 port 37960
Jul 6 23:27:24.640591 sshd-session[2271]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:24.645546 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:27:24.646972 systemd[1]: sshd@2-172.31.24.125:22-139.178.89.65:37960.service: Deactivated successfully.
Jul 6 23:27:24.653945 systemd-logind[2000]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:27:24.657321 systemd-logind[2000]: Removed session 3.
Jul 6 23:27:25.203005 amazon-ssm-agent[2145]: 2025-07-06 23:27:25.2028 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 6 23:27:25.303548 amazon-ssm-agent[2145]: 2025-07-06 23:27:25.2062 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2280) started
Jul 6 23:27:25.404435 amazon-ssm-agent[2145]: 2025-07-06 23:27:25.2063 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 6 23:27:25.559961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:27:25.568460 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:27:25.572738 systemd[1]: Startup finished in 3.773s (kernel) + 9.106s (initrd) + 10.845s (userspace) = 23.725s.
Jul 6 23:27:25.581032 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:27:27.418335 kubelet[2296]: E0706 23:27:27.418239 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:27:27.422616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:27:27.422951 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:27:27.423931 systemd[1]: kubelet.service: Consumed 1.449s CPU time, 258.3M memory peak.
Jul 6 23:27:28.272480 systemd-resolved[1831]: Clock change detected. Flushing caches.
Jul 6 23:27:35.169059 systemd[1]: Started sshd@3-172.31.24.125:22-139.178.89.65:44454.service - OpenSSH per-connection server daemon (139.178.89.65:44454).
Jul 6 23:27:35.366634 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 44454 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:35.369074 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:35.378025 systemd-logind[2000]: New session 4 of user core.
Jul 6 23:27:35.382205 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:27:35.507421 sshd[2310]: Connection closed by 139.178.89.65 port 44454
Jul 6 23:27:35.508227 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:35.514929 systemd[1]: sshd@3-172.31.24.125:22-139.178.89.65:44454.service: Deactivated successfully.
Jul 6 23:27:35.519310 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:27:35.524406 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:27:35.527367 systemd-logind[2000]: Removed session 4.
Jul 6 23:27:35.552835 systemd[1]: Started sshd@4-172.31.24.125:22-139.178.89.65:44458.service - OpenSSH per-connection server daemon (139.178.89.65:44458).
Jul 6 23:27:35.759800 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 44458 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:35.762289 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:35.771979 systemd-logind[2000]: New session 5 of user core.
Jul 6 23:27:35.778209 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:27:35.897650 sshd[2318]: Connection closed by 139.178.89.65 port 44458
Jul 6 23:27:35.898632 sshd-session[2316]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:35.905561 systemd[1]: sshd@4-172.31.24.125:22-139.178.89.65:44458.service: Deactivated successfully.
Jul 6 23:27:35.909382 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:27:35.913047 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:27:35.915807 systemd-logind[2000]: Removed session 5.
Jul 6 23:27:35.942374 systemd[1]: Started sshd@5-172.31.24.125:22-139.178.89.65:44466.service - OpenSSH per-connection server daemon (139.178.89.65:44466).
Jul 6 23:27:36.153480 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 44466 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:36.155906 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:36.167054 systemd-logind[2000]: New session 6 of user core.
Jul 6 23:27:36.174196 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:27:36.301136 sshd[2326]: Connection closed by 139.178.89.65 port 44466
Jul 6 23:27:36.301593 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:36.308461 systemd[1]: sshd@5-172.31.24.125:22-139.178.89.65:44466.service: Deactivated successfully.
Jul 6 23:27:36.311277 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:27:36.312785 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:27:36.315910 systemd-logind[2000]: Removed session 6.
Jul 6 23:27:36.339826 systemd[1]: Started sshd@6-172.31.24.125:22-139.178.89.65:44470.service - OpenSSH per-connection server daemon (139.178.89.65:44470).
Jul 6 23:27:36.540792 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 44470 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:36.543273 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:36.552024 systemd-logind[2000]: New session 7 of user core.
Jul 6 23:27:36.556215 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:27:36.673724 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:27:36.674892 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:27:36.693462 sudo[2335]: pam_unix(sudo:session): session closed for user root
Jul 6 23:27:36.717255 sshd[2334]: Connection closed by 139.178.89.65 port 44470
Jul 6 23:27:36.718285 sshd-session[2332]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:36.725644 systemd[1]: sshd@6-172.31.24.125:22-139.178.89.65:44470.service: Deactivated successfully.
Jul 6 23:27:36.729806 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:27:36.731472 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:27:36.735676 systemd-logind[2000]: Removed session 7.
Jul 6 23:27:36.757519 systemd[1]: Started sshd@7-172.31.24.125:22-139.178.89.65:44478.service - OpenSSH per-connection server daemon (139.178.89.65:44478).
Jul 6 23:27:36.966778 sshd[2341]: Accepted publickey for core from 139.178.89.65 port 44478 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:36.969398 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:36.977116 systemd-logind[2000]: New session 8 of user core.
Jul 6 23:27:36.988210 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:27:37.090931 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:27:37.091562 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:27:37.098924 sudo[2345]: pam_unix(sudo:session): session closed for user root
Jul 6 23:27:37.108424 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:27:37.109545 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:27:37.125381 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:27:37.195746 augenrules[2367]: No rules
Jul 6 23:27:37.198313 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:27:37.200030 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:27:37.202552 sudo[2344]: pam_unix(sudo:session): session closed for user root
Jul 6 23:27:37.225928 sshd[2343]: Connection closed by 139.178.89.65 port 44478
Jul 6 23:27:37.225724 sshd-session[2341]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:37.233604 systemd-logind[2000]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:27:37.235225 systemd[1]: sshd@7-172.31.24.125:22-139.178.89.65:44478.service: Deactivated successfully.
Jul 6 23:27:37.238720 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:27:37.241784 systemd-logind[2000]: Removed session 8.
Jul 6 23:27:37.262482 systemd[1]: Started sshd@8-172.31.24.125:22-139.178.89.65:44494.service - OpenSSH per-connection server daemon (139.178.89.65:44494).
Jul 6 23:27:37.461687 sshd[2376]: Accepted publickey for core from 139.178.89.65 port 44494 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:27:37.464172 sshd-session[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:37.472014 systemd-logind[2000]: New session 9 of user core.
Jul 6 23:27:37.488192 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:27:37.591036 sudo[2379]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:27:37.591642 sudo[2379]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:27:37.988607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:27:37.992845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:27:38.197321 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:27:38.210869 (dockerd)[2399]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:27:38.408979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:27:38.425002 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:27:38.505966 kubelet[2409]: E0706 23:27:38.505820 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:27:38.515002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:27:38.515527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:27:38.516474 systemd[1]: kubelet.service: Consumed 327ms CPU time, 107M memory peak.
Jul 6 23:27:38.658603 dockerd[2399]: time="2025-07-06T23:27:38.658493011Z" level=info msg="Starting up"
Jul 6 23:27:38.660053 dockerd[2399]: time="2025-07-06T23:27:38.659930731Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:27:38.758223 dockerd[2399]: time="2025-07-06T23:27:38.758054672Z" level=info msg="Loading containers: start."
Jul 6 23:27:38.773019 kernel: Initializing XFRM netlink socket
Jul 6 23:27:39.101259 (udev-worker)[2432]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:27:39.179686 systemd-networkd[1820]: docker0: Link UP
Jul 6 23:27:39.190627 dockerd[2399]: time="2025-07-06T23:27:39.190526982Z" level=info msg="Loading containers: done."
Jul 6 23:27:39.218793 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3280951651-merged.mount: Deactivated successfully.
Jul 6 23:27:39.222270 dockerd[2399]: time="2025-07-06T23:27:39.222154314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:27:39.222466 dockerd[2399]: time="2025-07-06T23:27:39.222310542Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:27:39.222602 dockerd[2399]: time="2025-07-06T23:27:39.222540174Z" level=info msg="Initializing buildkit"
Jul 6 23:27:39.276874 dockerd[2399]: time="2025-07-06T23:27:39.276798090Z" level=info msg="Completed buildkit initialization"
Jul 6 23:27:39.293742 dockerd[2399]: time="2025-07-06T23:27:39.293641674Z" level=info msg="Daemon has completed initialization"
Jul 6 23:27:39.294368 dockerd[2399]: time="2025-07-06T23:27:39.294136578Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:27:39.294261 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:27:40.395446 containerd[2034]: time="2025-07-06T23:27:40.395367776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 6 23:27:41.030266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009444222.mount: Deactivated successfully.
Jul 6 23:27:42.345921 containerd[2034]: time="2025-07-06T23:27:42.345824049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:42.348692 containerd[2034]: time="2025-07-06T23:27:42.348599997Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194"
Jul 6 23:27:42.355772 containerd[2034]: time="2025-07-06T23:27:42.354975861Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:42.360733 containerd[2034]: time="2025-07-06T23:27:42.360657945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:42.362838 containerd[2034]: time="2025-07-06T23:27:42.362758437Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.967303217s"
Jul 6 23:27:42.362838 containerd[2034]: time="2025-07-06T23:27:42.362832933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 6 23:27:42.363868 containerd[2034]: time="2025-07-06T23:27:42.363823233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 6 23:27:43.717686 containerd[2034]: time="2025-07-06T23:27:43.717597720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:43.719535 containerd[2034]: time="2025-07-06T23:27:43.719379216Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228"
Jul 6 23:27:43.720872 containerd[2034]: time="2025-07-06T23:27:43.720392172Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:43.727676 containerd[2034]: time="2025-07-06T23:27:43.727584240Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.363426951s"
Jul 6 23:27:43.727676 containerd[2034]: time="2025-07-06T23:27:43.727652388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 6 23:27:43.727872 containerd[2034]: time="2025-07-06T23:27:43.727791240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:43.728745 containerd[2034]: time="2025-07-06T23:27:43.728413008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 6 23:27:44.849009 containerd[2034]: time="2025-07-06T23:27:44.848593598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:44.850718 containerd[2034]: time="2025-07-06T23:27:44.850647590Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141"
Jul 6 23:27:44.853139 containerd[2034]: time="2025-07-06T23:27:44.853064222Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:44.858466 containerd[2034]: time="2025-07-06T23:27:44.858359018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:27:44.860404 containerd[2034]: time="2025-07-06T23:27:44.860222870Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.131754254s"
Jul 6 23:27:44.860404 containerd[2034]: time="2025-07-06T23:27:44.860277038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 6 23:27:44.861998 containerd[2034]: time="2025-07-06T23:27:44.861917726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 6 23:27:46.126856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071087555.mount: Deactivated successfully.
Jul 6 23:27:46.747984 containerd[2034]: time="2025-07-06T23:27:46.747292947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:46.750226 containerd[2034]: time="2025-07-06T23:27:46.750179619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 6 23:27:46.752814 containerd[2034]: time="2025-07-06T23:27:46.752763015Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:46.757141 containerd[2034]: time="2025-07-06T23:27:46.757072599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:46.758674 containerd[2034]: time="2025-07-06T23:27:46.758623779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.896620253s" Jul 6 23:27:46.758846 containerd[2034]: time="2025-07-06T23:27:46.758817135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 6 23:27:46.759478 containerd[2034]: time="2025-07-06T23:27:46.759421095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:27:47.313345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501067307.mount: Deactivated successfully. 
Jul 6 23:27:48.663404 containerd[2034]: time="2025-07-06T23:27:48.663302153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:48.666440 containerd[2034]: time="2025-07-06T23:27:48.666348809Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 6 23:27:48.669254 containerd[2034]: time="2025-07-06T23:27:48.669158429Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:48.675170 containerd[2034]: time="2025-07-06T23:27:48.675080849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:48.677627 containerd[2034]: time="2025-07-06T23:27:48.677263337Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.917624454s" Jul 6 23:27:48.677627 containerd[2034]: time="2025-07-06T23:27:48.677325377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:27:48.678109 containerd[2034]: time="2025-07-06T23:27:48.677987009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:27:48.738493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:27:48.742355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:27:49.125563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:49.139900 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:49.231395 kubelet[2744]: E0706 23:27:49.231329 2744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:49.232803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738438278.mount: Deactivated successfully. Jul 6 23:27:49.238360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:49.239912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:49.241133 systemd[1]: kubelet.service: Consumed 333ms CPU time, 105.2M memory peak. 
Jul 6 23:27:49.248991 containerd[2034]: time="2025-07-06T23:27:49.248081224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:49.251283 containerd[2034]: time="2025-07-06T23:27:49.251227744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:27:49.254001 containerd[2034]: time="2025-07-06T23:27:49.253906780Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:49.259149 containerd[2034]: time="2025-07-06T23:27:49.259057144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:49.261007 containerd[2034]: time="2025-07-06T23:27:49.260924464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 582.874011ms" Jul 6 23:27:49.261234 containerd[2034]: time="2025-07-06T23:27:49.261197848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:27:49.262228 containerd[2034]: time="2025-07-06T23:27:49.262045432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:27:49.804286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288973179.mount: Deactivated 
successfully. Jul 6 23:27:52.004826 containerd[2034]: time="2025-07-06T23:27:52.004723589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:52.008858 containerd[2034]: time="2025-07-06T23:27:52.008752577Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 6 23:27:52.012351 containerd[2034]: time="2025-07-06T23:27:52.012259301Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:52.022743 containerd[2034]: time="2025-07-06T23:27:52.022617617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:52.024963 containerd[2034]: time="2025-07-06T23:27:52.024888533Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.762321173s" Jul 6 23:27:52.025280 containerd[2034]: time="2025-07-06T23:27:52.025134749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 6 23:27:52.493910 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:27:59.489187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:27:59.495303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:59.871252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:27:59.886658 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:59.974359 kubelet[2840]: E0706 23:27:59.974268 2840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:59.979370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:59.979991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:59.980663 systemd[1]: kubelet.service: Consumed 332ms CPU time, 107.2M memory peak. Jul 6 23:28:00.449812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:00.450510 systemd[1]: kubelet.service: Consumed 332ms CPU time, 107.2M memory peak. Jul 6 23:28:00.455302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:00.511807 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-9.scope)... Jul 6 23:28:00.511841 systemd[1]: Reloading... Jul 6 23:28:00.780003 zram_generator::config[2898]: No configuration found. Jul 6 23:28:01.022036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:01.315186 systemd[1]: Reloading finished in 801 ms. Jul 6 23:28:01.442122 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:28:01.442374 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:28:01.443138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:28:01.443242 systemd[1]: kubelet.service: Consumed 257ms CPU time, 95M memory peak. Jul 6 23:28:01.447806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:01.807410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:01.827040 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:01.908292 kubelet[2961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:01.909528 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:01.909528 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:28:01.909765 kubelet[2961]: I0706 23:28:01.909522 2961 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:02.617711 kubelet[2961]: I0706 23:28:02.617631 2961 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:28:02.617711 kubelet[2961]: I0706 23:28:02.617691 2961 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:02.618618 kubelet[2961]: I0706 23:28:02.618547 2961 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:28:02.682592 kubelet[2961]: E0706 23:28:02.682524 2961 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:02.686717 kubelet[2961]: I0706 23:28:02.686643 2961 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:02.699505 kubelet[2961]: I0706 23:28:02.698997 2961 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:28:02.704933 kubelet[2961]: I0706 23:28:02.704874 2961 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:02.705457 kubelet[2961]: I0706 23:28:02.705400 2961 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:02.705762 kubelet[2961]: I0706 23:28:02.705457 2961 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:02.705955 kubelet[2961]: I0706 23:28:02.705901 2961 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 6 23:28:02.705955 kubelet[2961]: I0706 23:28:02.705929 2961 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:28:02.706331 kubelet[2961]: I0706 23:28:02.706301 2961 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:02.712768 kubelet[2961]: I0706 23:28:02.712596 2961 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:28:02.712768 kubelet[2961]: I0706 23:28:02.712646 2961 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:02.712768 kubelet[2961]: I0706 23:28:02.712689 2961 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:28:02.712768 kubelet[2961]: I0706 23:28:02.712709 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:02.716598 kubelet[2961]: W0706 23:28:02.715792 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-125&limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused Jul 6 23:28:02.716598 kubelet[2961]: E0706 23:28:02.715887 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-125&limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:02.717438 kubelet[2961]: W0706 23:28:02.717353 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused Jul 6 23:28:02.717573 kubelet[2961]: E0706 23:28:02.717484 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.24.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:02.718270 kubelet[2961]: I0706 23:28:02.718228 2961 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:28:02.719377 kubelet[2961]: I0706 23:28:02.719327 2961 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:28:02.719617 kubelet[2961]: W0706 23:28:02.719579 2961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:28:02.722992 kubelet[2961]: I0706 23:28:02.722568 2961 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:02.722992 kubelet[2961]: I0706 23:28:02.722641 2961 server.go:1287] "Started kubelet" Jul 6 23:28:02.731069 kubelet[2961]: I0706 23:28:02.730986 2961 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:02.736055 kubelet[2961]: E0706 23:28:02.735511 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-125.184fcd467e68aa2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-125,UID:ip-172-31-24-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-125,},FirstTimestamp:2025-07-06 23:28:02.722605615 +0000 UTC m=+0.888209946,LastTimestamp:2025-07-06 23:28:02.722605615 +0000 UTC m=+0.888209946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-125,}" Jul 6 23:28:02.740004 kubelet[2961]: I0706 23:28:02.739810 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:02.740557 kubelet[2961]: I0706 23:28:02.740432 2961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:02.741991 kubelet[2961]: I0706 23:28:02.741036 2961 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:02.741991 kubelet[2961]: I0706 23:28:02.741467 2961 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:28:02.745456 kubelet[2961]: I0706 23:28:02.745399 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:02.749495 kubelet[2961]: I0706 23:28:02.749451 2961 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:02.750613 kubelet[2961]: E0706 23:28:02.750559 2961 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-125\" not found" Jul 6 23:28:02.754246 kubelet[2961]: I0706 23:28:02.754206 2961 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:02.754547 kubelet[2961]: I0706 23:28:02.754525 2961 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:02.756388 kubelet[2961]: W0706 23:28:02.756308 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused Jul 6 23:28:02.756880 kubelet[2961]: E0706 23:28:02.756842 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.24.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:02.758112 kubelet[2961]: E0706 23:28:02.757198 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": dial tcp 172.31.24.125:6443: connect: connection refused" interval="200ms" Jul 6 23:28:02.759497 kubelet[2961]: I0706 23:28:02.759380 2961 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:28:02.759723 kubelet[2961]: I0706 23:28:02.759692 2961 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:02.763818 kubelet[2961]: I0706 23:28:02.763715 2961 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:28:02.783624 kubelet[2961]: I0706 23:28:02.783542 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:28:02.786617 kubelet[2961]: I0706 23:28:02.786570 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:02.787482 kubelet[2961]: I0706 23:28:02.786826 2961 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:28:02.787482 kubelet[2961]: I0706 23:28:02.786874 2961 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:28:02.787482 kubelet[2961]: I0706 23:28:02.786892 2961 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:28:02.787482 kubelet[2961]: E0706 23:28:02.787055 2961 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:02.804859 kubelet[2961]: W0706 23:28:02.804761 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused Jul 6 23:28:02.805030 kubelet[2961]: E0706 23:28:02.804872 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:02.809827 kubelet[2961]: I0706 23:28:02.809769 2961 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:02.809827 kubelet[2961]: I0706 23:28:02.809829 2961 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:02.810150 kubelet[2961]: I0706 23:28:02.809861 2961 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:02.817296 kubelet[2961]: I0706 23:28:02.817202 2961 policy_none.go:49] "None policy: Start" Jul 6 23:28:02.817296 kubelet[2961]: I0706 23:28:02.817258 2961 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:02.817296 kubelet[2961]: I0706 23:28:02.817305 2961 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:02.831002 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 6 23:28:02.851506 kubelet[2961]: E0706 23:28:02.851450 2961 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-125\" not found" Jul 6 23:28:02.857921 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:28:02.865647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:28:02.883618 kubelet[2961]: I0706 23:28:02.883480 2961 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:28:02.885925 kubelet[2961]: I0706 23:28:02.883805 2961 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:02.885925 kubelet[2961]: I0706 23:28:02.883838 2961 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:02.885925 kubelet[2961]: I0706 23:28:02.885368 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:02.889740 kubelet[2961]: E0706 23:28:02.889682 2961 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:28:02.889984 kubelet[2961]: E0706 23:28:02.889921 2961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-125\" not found" Jul 6 23:28:02.914789 systemd[1]: Created slice kubepods-burstable-podfe7ef0c5f6279b85a8e61a965c311b3d.slice - libcontainer container kubepods-burstable-podfe7ef0c5f6279b85a8e61a965c311b3d.slice. 
Jul 6 23:28:02.930447 kubelet[2961]: E0706 23:28:02.930050 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:02.937661 systemd[1]: Created slice kubepods-burstable-podede80012960fc721f1ef4d755df0891c.slice - libcontainer container kubepods-burstable-podede80012960fc721f1ef4d755df0891c.slice.
Jul 6 23:28:02.942733 kubelet[2961]: E0706 23:28:02.942677 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:02.946201 systemd[1]: Created slice kubepods-burstable-podfe74708c157f042c7edcb368b67cc7c1.slice - libcontainer container kubepods-burstable-podfe74708c157f042c7edcb368b67cc7c1.slice.
Jul 6 23:28:02.949892 kubelet[2961]: E0706 23:28:02.949842 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:02.955853 kubelet[2961]: I0706 23:28:02.955732 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:02.956120 kubelet[2961]: I0706 23:28:02.956038 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:02.956232 kubelet[2961]: I0706 23:28:02.956208 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:02.956459 kubelet[2961]: I0706 23:28:02.956332 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ede80012960fc721f1ef4d755df0891c-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-125\" (UID: \"ede80012960fc721f1ef4d755df0891c\") " pod="kube-system/kube-scheduler-ip-172-31-24-125"
Jul 6 23:28:02.956561 kubelet[2961]: I0706 23:28:02.956370 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-ca-certs\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:02.959061 kubelet[2961]: E0706 23:28:02.958923 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": dial tcp 172.31.24.125:6443: connect: connection refused" interval="400ms"
Jul 6 23:28:02.987096 kubelet[2961]: I0706 23:28:02.986342 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-125"
Jul 6 23:28:02.987096 kubelet[2961]: E0706 23:28:02.987043 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.125:6443/api/v1/nodes\": dial tcp 172.31.24.125:6443: connect: connection refused" node="ip-172-31-24-125"
Jul 6 23:28:03.056971 kubelet[2961]: I0706 23:28:03.056893 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:03.057230 kubelet[2961]: I0706 23:28:03.057200 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:03.057410 kubelet[2961]: I0706 23:28:03.057380 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:03.057671 kubelet[2961]: I0706 23:28:03.057642 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:03.190066 kubelet[2961]: I0706 23:28:03.189902 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-125"
Jul 6 23:28:03.190798 kubelet[2961]: E0706 23:28:03.190726 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.125:6443/api/v1/nodes\": dial tcp 172.31.24.125:6443: connect: connection refused" node="ip-172-31-24-125"
Jul 6 23:28:03.232860 containerd[2034]: time="2025-07-06T23:28:03.232789949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-125,Uid:fe7ef0c5f6279b85a8e61a965c311b3d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:03.244998 containerd[2034]: time="2025-07-06T23:28:03.244711601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-125,Uid:ede80012960fc721f1ef4d755df0891c,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:03.252362 containerd[2034]: time="2025-07-06T23:28:03.252261665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-125,Uid:fe74708c157f042c7edcb368b67cc7c1,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:03.300343 containerd[2034]: time="2025-07-06T23:28:03.300191597Z" level=info msg="connecting to shim 8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85" address="unix:///run/containerd/s/454d054bbd101e4ec6c10a231cb8fc3ae52e7115b82ae50865805da579ae7c5b" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:03.339008 containerd[2034]: time="2025-07-06T23:28:03.338287170Z" level=info msg="connecting to shim 06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615" address="unix:///run/containerd/s/50b5c199e8c85f1cc393ed90befe7a56ee8cb031c9d11b23acefee61b686afee" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:03.359913 kubelet[2961]: E0706 23:28:03.359852 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": dial tcp 172.31.24.125:6443: connect: connection refused" interval="800ms"
Jul 6 23:28:03.381115 containerd[2034]: time="2025-07-06T23:28:03.380727282Z" level=info msg="connecting to shim 62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281" address="unix:///run/containerd/s/50c5fd556e3398ccaca07e3b6b693e0f9f32365a743bffe946b6cac056b41094" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:03.399419 systemd[1]: Started cri-containerd-8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85.scope - libcontainer container 8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85.
Jul 6 23:28:03.435277 systemd[1]: Started cri-containerd-06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615.scope - libcontainer container 06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615.
Jul 6 23:28:03.461628 systemd[1]: Started cri-containerd-62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281.scope - libcontainer container 62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281.
Jul 6 23:28:03.523539 kubelet[2961]: W0706 23:28:03.523460 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused
Jul 6 23:28:03.524885 kubelet[2961]: E0706 23:28:03.524807 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:28:03.555769 containerd[2034]: time="2025-07-06T23:28:03.555666679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-125,Uid:fe7ef0c5f6279b85a8e61a965c311b3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85\""
Jul 6 23:28:03.588654 containerd[2034]: time="2025-07-06T23:28:03.588041191Z" level=info msg="CreateContainer within sandbox \"8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:28:03.601736 kubelet[2961]: W0706 23:28:03.601633 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused
Jul 6 23:28:03.601931 kubelet[2961]: E0706 23:28:03.601755 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:28:03.604465 kubelet[2961]: I0706 23:28:03.604063 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-125"
Jul 6 23:28:03.606377 kubelet[2961]: E0706 23:28:03.606264 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.125:6443/api/v1/nodes\": dial tcp 172.31.24.125:6443: connect: connection refused" node="ip-172-31-24-125"
Jul 6 23:28:03.625715 kubelet[2961]: W0706 23:28:03.625225 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused
Jul 6 23:28:03.625715 kubelet[2961]: E0706 23:28:03.625411 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:28:03.631933 containerd[2034]: time="2025-07-06T23:28:03.631662163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-125,Uid:ede80012960fc721f1ef4d755df0891c,Namespace:kube-system,Attempt:0,} returns sandbox id \"06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615\""
Jul 6 23:28:03.638726 containerd[2034]: time="2025-07-06T23:28:03.638528023Z" level=info msg="Container 4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:03.640519 containerd[2034]: time="2025-07-06T23:28:03.640419743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-125,Uid:fe74708c157f042c7edcb368b67cc7c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281\""
Jul 6 23:28:03.641478 containerd[2034]: time="2025-07-06T23:28:03.641432143Z" level=info msg="CreateContainer within sandbox \"06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:28:03.647269 containerd[2034]: time="2025-07-06T23:28:03.647178847Z" level=info msg="CreateContainer within sandbox \"62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:28:03.655996 containerd[2034]: time="2025-07-06T23:28:03.655636831Z" level=info msg="CreateContainer within sandbox \"8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\""
Jul 6 23:28:03.656932 containerd[2034]: time="2025-07-06T23:28:03.656869099Z" level=info msg="StartContainer for \"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\""
Jul 6 23:28:03.659106 containerd[2034]: time="2025-07-06T23:28:03.659042131Z" level=info msg="connecting to shim 4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3" address="unix:///run/containerd/s/454d054bbd101e4ec6c10a231cb8fc3ae52e7115b82ae50865805da579ae7c5b" protocol=ttrpc version=3
Jul 6 23:28:03.671975 containerd[2034]: time="2025-07-06T23:28:03.670405183Z" level=info msg="Container bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:03.681610 containerd[2034]: time="2025-07-06T23:28:03.681537295Z" level=info msg="Container dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:03.694134 containerd[2034]: time="2025-07-06T23:28:03.693973051Z" level=info msg="CreateContainer within sandbox \"06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\""
Jul 6 23:28:03.695064 containerd[2034]: time="2025-07-06T23:28:03.694851295Z" level=info msg="StartContainer for \"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\""
Jul 6 23:28:03.699734 containerd[2034]: time="2025-07-06T23:28:03.699656503Z" level=info msg="connecting to shim bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9" address="unix:///run/containerd/s/50b5c199e8c85f1cc393ed90befe7a56ee8cb031c9d11b23acefee61b686afee" protocol=ttrpc version=3
Jul 6 23:28:03.703352 systemd[1]: Started cri-containerd-4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3.scope - libcontainer container 4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3.
Jul 6 23:28:03.705314 containerd[2034]: time="2025-07-06T23:28:03.704482819Z" level=info msg="CreateContainer within sandbox \"62a7977656a9e1c4d2870e037d09f56485f9a570c23de9701201146222031281\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75\""
Jul 6 23:28:03.706688 containerd[2034]: time="2025-07-06T23:28:03.706638596Z" level=info msg="StartContainer for \"dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75\""
Jul 6 23:28:03.714424 containerd[2034]: time="2025-07-06T23:28:03.713044040Z" level=info msg="connecting to shim dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75" address="unix:///run/containerd/s/50c5fd556e3398ccaca07e3b6b693e0f9f32365a743bffe946b6cac056b41094" protocol=ttrpc version=3
Jul 6 23:28:03.768116 systemd[1]: Started cri-containerd-bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9.scope - libcontainer container bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9.
Jul 6 23:28:03.784325 systemd[1]: Started cri-containerd-dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75.scope - libcontainer container dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75.
Jul 6 23:28:03.887192 kubelet[2961]: W0706 23:28:03.886758 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-125&limit=500&resourceVersion=0": dial tcp 172.31.24.125:6443: connect: connection refused
Jul 6 23:28:03.888560 kubelet[2961]: E0706 23:28:03.888487 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-125&limit=500&resourceVersion=0\": dial tcp 172.31.24.125:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:28:03.913492 containerd[2034]: time="2025-07-06T23:28:03.913363077Z" level=info msg="StartContainer for \"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\" returns successfully"
Jul 6 23:28:03.968765 containerd[2034]: time="2025-07-06T23:28:03.968191413Z" level=info msg="StartContainer for \"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\" returns successfully"
Jul 6 23:28:03.996728 containerd[2034]: time="2025-07-06T23:28:03.996650133Z" level=info msg="StartContainer for \"dcb820ceead1be5cdf6165be0e5fb38c4dc7bc5dca767d8ac33d16822422ca75\" returns successfully"
Jul 6 23:28:04.161607 kubelet[2961]: E0706 23:28:04.161513 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": dial tcp 172.31.24.125:6443: connect: connection refused" interval="1.6s"
Jul 6 23:28:04.410394 kubelet[2961]: I0706 23:28:04.409714 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-125"
Jul 6 23:28:04.862985 kubelet[2961]: E0706 23:28:04.861660 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:04.870117 kubelet[2961]: E0706 23:28:04.870077 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:04.875532 kubelet[2961]: E0706 23:28:04.875487 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:05.879105 kubelet[2961]: E0706 23:28:05.879064 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:05.881914 kubelet[2961]: E0706 23:28:05.881860 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:05.883557 kubelet[2961]: E0706 23:28:05.883061 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:06.338088 update_engine[2003]: I20250706 23:28:06.337989 2003 update_attempter.cc:509] Updating boot flags...
Jul 6 23:28:06.891972 kubelet[2961]: E0706 23:28:06.891902 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:06.897466 kubelet[2961]: E0706 23:28:06.893429 2961 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:08.318592 kubelet[2961]: E0706 23:28:08.318531 2961 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-125\" not found" node="ip-172-31-24-125"
Jul 6 23:28:08.369479 kubelet[2961]: E0706 23:28:08.369066 2961 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-125.184fcd467e68aa2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-125,UID:ip-172-31-24-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-125,},FirstTimestamp:2025-07-06 23:28:02.722605615 +0000 UTC m=+0.888209946,LastTimestamp:2025-07-06 23:28:02.722605615 +0000 UTC m=+0.888209946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-125,}"
Jul 6 23:28:08.403147 kubelet[2961]: I0706 23:28:08.403098 2961 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-125"
Jul 6 23:28:08.437828 kubelet[2961]: E0706 23:28:08.437463 2961 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-125.184fcd46837699b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-125,UID:ip-172-31-24-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-24-125 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-24-125,},FirstTimestamp:2025-07-06 23:28:02.807404983 +0000 UTC m=+0.973009302,LastTimestamp:2025-07-06 23:28:02.807404983 +0000 UTC m=+0.973009302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-125,}"
Jul 6 23:28:08.452787 kubelet[2961]: I0706 23:28:08.452742 2961 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:08.470654 kubelet[2961]: E0706 23:28:08.470611 2961 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:08.470858 kubelet[2961]: I0706 23:28:08.470835 2961 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-125"
Jul 6 23:28:08.476475 kubelet[2961]: E0706 23:28:08.476414 2961 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-125"
Jul 6 23:28:08.476475 kubelet[2961]: I0706 23:28:08.476469 2961 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:08.483741 kubelet[2961]: E0706 23:28:08.483686 2961 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:08.722145 kubelet[2961]: I0706 23:28:08.721472 2961 apiserver.go:52] "Watching apiserver"
Jul 6 23:28:08.754797 kubelet[2961]: I0706 23:28:08.754750 2961 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:28:10.310537 systemd[1]: Reload requested from client PID 3418 ('systemctl') (unit session-9.scope)...
Jul 6 23:28:10.310569 systemd[1]: Reloading...
Jul 6 23:28:10.497010 zram_generator::config[3462]: No configuration found.
Jul 6 23:28:10.712707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:28:11.010678 systemd[1]: Reloading finished in 699 ms.
Jul 6 23:28:11.069593 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:28:11.091694 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:28:11.092287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:28:11.092391 systemd[1]: kubelet.service: Consumed 1.756s CPU time, 127.1M memory peak.
Jul 6 23:28:11.096972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:28:11.459551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:28:11.480664 (kubelet)[3522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:28:11.569445 kubelet[3522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:28:11.569445 kubelet[3522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:28:11.569445 kubelet[3522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:28:11.570117 kubelet[3522]: I0706 23:28:11.569424 3522 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:28:11.585862 kubelet[3522]: I0706 23:28:11.585783 3522 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 6 23:28:11.585862 kubelet[3522]: I0706 23:28:11.585842 3522 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:28:11.589008 kubelet[3522]: I0706 23:28:11.588795 3522 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 6 23:28:11.596751 kubelet[3522]: I0706 23:28:11.596037 3522 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 6 23:28:11.602327 kubelet[3522]: I0706 23:28:11.601758 3522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:28:11.619102 kubelet[3522]: I0706 23:28:11.619044 3522 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:28:11.628755 kubelet[3522]: I0706 23:28:11.628696 3522 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:28:11.629277 kubelet[3522]: I0706 23:28:11.629207 3522 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:28:11.629566 kubelet[3522]: I0706 23:28:11.629276 3522 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629581 3522 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629601 3522 container_manager_linux.go:304] "Creating device plugin manager"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629673 3522 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629901 3522 kubelet.go:446] "Attempting to sync node with API server"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629924 3522 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.629982 3522 kubelet.go:352] "Adding apiserver pod source"
Jul 6 23:28:11.631311 kubelet[3522]: I0706 23:28:11.630004 3522 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:28:11.636502 kubelet[3522]: I0706 23:28:11.636467 3522 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:28:11.638377 kubelet[3522]: I0706 23:28:11.637472 3522 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:28:11.642796 kubelet[3522]: I0706 23:28:11.641259 3522 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:28:11.642796 kubelet[3522]: I0706 23:28:11.641320 3522 server.go:1287] "Started kubelet"
Jul 6 23:28:11.644986 kubelet[3522]: I0706 23:28:11.643792 3522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:28:11.644986 kubelet[3522]: I0706 23:28:11.644311 3522 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:28:11.644986 kubelet[3522]: I0706 23:28:11.644401 3522 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:28:11.647599 kubelet[3522]: I0706 23:28:11.646779 3522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:28:11.658992 kubelet[3522]: I0706 23:28:11.658778 3522 server.go:479] "Adding debug handlers to kubelet server"
Jul 6 23:28:11.677443 kubelet[3522]: I0706 23:28:11.677397 3522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:28:11.682157 kubelet[3522]: I0706 23:28:11.682076 3522 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:28:11.682671 kubelet[3522]: E0706 23:28:11.682501 3522 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-125\" not found"
Jul 6 23:28:11.685038 kubelet[3522]: I0706 23:28:11.685002 3522 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:28:11.691977 kubelet[3522]: I0706 23:28:11.685587 3522 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:28:11.736248 kubelet[3522]: I0706 23:28:11.736087 3522 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:28:11.741041 kubelet[3522]: I0706 23:28:11.740981 3522 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:28:11.751158 kubelet[3522]: I0706 23:28:11.750715 3522 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:28:11.754017 kubelet[3522]: I0706 23:28:11.753090 3522 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:28:11.755880 kubelet[3522]: I0706 23:28:11.755774 3522 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:28:11.755880 kubelet[3522]: I0706 23:28:11.755828 3522 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 6 23:28:11.755880 kubelet[3522]: I0706 23:28:11.755862 3522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:28:11.755880 kubelet[3522]: I0706 23:28:11.755877 3522 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 6 23:28:11.756204 kubelet[3522]: E0706 23:28:11.755975 3522 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:28:11.769080 kubelet[3522]: E0706 23:28:11.769015 3522 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:28:11.856069 kubelet[3522]: E0706 23:28:11.856030 3522 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 6 23:28:11.874351 kubelet[3522]: I0706 23:28:11.874301 3522 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:28:11.874351 kubelet[3522]: I0706 23:28:11.874338 3522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:28:11.874570 kubelet[3522]: I0706 23:28:11.874377 3522 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.874761 3522 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.874796 3522 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.874882 3522 policy_none.go:49] "None policy: Start"
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.874913 3522 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.874955 3522 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:28:11.875633 kubelet[3522]: I0706 23:28:11.875210 3522 state_mem.go:75] "Updated machine memory state"
Jul 6 23:28:11.886986 kubelet[3522]: I0706 23:28:11.886680 3522 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:28:11.887423 kubelet[3522]: I0706 23:28:11.887383 3522 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:28:11.887498 kubelet[3522]: I0706 23:28:11.887418 3522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:28:11.889487 kubelet[3522]: I0706 23:28:11.888514 3522 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:28:11.895236 kubelet[3522]: E0706 23:28:11.895158 3522 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:28:12.026008 kubelet[3522]: I0706 23:28:12.025493 3522 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-125"
Jul 6 23:28:12.047003 kubelet[3522]: I0706 23:28:12.046928 3522 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-125"
Jul 6 23:28:12.047725 kubelet[3522]: I0706 23:28:12.047073 3522 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-125"
Jul 6 23:28:12.057544 kubelet[3522]: I0706 23:28:12.057478 3522 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:12.058141 kubelet[3522]: I0706 23:28:12.058095 3522 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-125"
Jul 6 23:28:12.058326 kubelet[3522]: I0706 23:28:12.058253 3522 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:12.094878 kubelet[3522]: I0706 23:28:12.094768 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ede80012960fc721f1ef4d755df0891c-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-125\" (UID: \"ede80012960fc721f1ef4d755df0891c\") " pod="kube-system/kube-scheduler-ip-172-31-24-125"
Jul 6 23:28:12.095245 kubelet[3522]: I0706 23:28:12.095207 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-ca-certs\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:12.095492 kubelet[3522]: I0706 23:28:12.095399 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125"
Jul 6 23:28:12.095661 kubelet[3522]: I0706 23:28:12.095562 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:12.095787 kubelet[3522]: I0706 23:28:12.095763 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125"
Jul 6 23:28:12.096009 kubelet[3522]: I0706 23:28:12.095983 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") "
pod="kube-system/kube-controller-manager-ip-172-31-24-125" Jul 6 23:28:12.096252 kubelet[3522]: I0706 23:28:12.096204 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125" Jul 6 23:28:12.096479 kubelet[3522]: I0706 23:28:12.096423 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7ef0c5f6279b85a8e61a965c311b3d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-125\" (UID: \"fe7ef0c5f6279b85a8e61a965c311b3d\") " pod="kube-system/kube-controller-manager-ip-172-31-24-125" Jul 6 23:28:12.096692 kubelet[3522]: I0706 23:28:12.096610 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe74708c157f042c7edcb368b67cc7c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-125\" (UID: \"fe74708c157f042c7edcb368b67cc7c1\") " pod="kube-system/kube-apiserver-ip-172-31-24-125" Jul 6 23:28:12.639483 kubelet[3522]: I0706 23:28:12.639417 3522 apiserver.go:52] "Watching apiserver" Jul 6 23:28:12.692688 kubelet[3522]: I0706 23:28:12.692439 3522 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:12.786311 kubelet[3522]: I0706 23:28:12.786081 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-125" podStartSLOduration=0.786058361 podStartE2EDuration="786.058361ms" podCreationTimestamp="2025-07-06 23:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-06 23:28:12.767666129 +0000 UTC m=+1.276980344" watchObservedRunningTime="2025-07-06 23:28:12.786058361 +0000 UTC m=+1.295372096" Jul 6 23:28:12.787069 kubelet[3522]: I0706 23:28:12.786965 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-125" podStartSLOduration=0.786924953 podStartE2EDuration="786.924953ms" podCreationTimestamp="2025-07-06 23:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:12.785286041 +0000 UTC m=+1.294599740" watchObservedRunningTime="2025-07-06 23:28:12.786924953 +0000 UTC m=+1.296238640" Jul 6 23:28:12.809480 kubelet[3522]: I0706 23:28:12.809102 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-125" podStartSLOduration=0.809084105 podStartE2EDuration="809.084105ms" podCreationTimestamp="2025-07-06 23:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:12.808328897 +0000 UTC m=+1.317642680" watchObservedRunningTime="2025-07-06 23:28:12.809084105 +0000 UTC m=+1.318397792" Jul 6 23:28:16.506985 kubelet[3522]: I0706 23:28:16.506821 3522 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:28:16.507915 containerd[2034]: time="2025-07-06T23:28:16.507847315Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:28:16.508719 kubelet[3522]: I0706 23:28:16.508682 3522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 6 23:28:17.634619 systemd[1]: Created slice kubepods-besteffort-pod2e0e8d74_9895_4e53_a28b_fbf783d93949.slice - libcontainer container kubepods-besteffort-pod2e0e8d74_9895_4e53_a28b_fbf783d93949.slice.
Jul 6 23:28:17.637112 kubelet[3522]: I0706 23:28:17.636481 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e0e8d74-9895-4e53-a28b-fbf783d93949-xtables-lock\") pod \"kube-proxy-45vqv\" (UID: \"2e0e8d74-9895-4e53-a28b-fbf783d93949\") " pod="kube-system/kube-proxy-45vqv"
Jul 6 23:28:17.637112 kubelet[3522]: I0706 23:28:17.636544 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncfp\" (UniqueName: \"kubernetes.io/projected/2e0e8d74-9895-4e53-a28b-fbf783d93949-kube-api-access-sncfp\") pod \"kube-proxy-45vqv\" (UID: \"2e0e8d74-9895-4e53-a28b-fbf783d93949\") " pod="kube-system/kube-proxy-45vqv"
Jul 6 23:28:17.637112 kubelet[3522]: I0706 23:28:17.636590 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e0e8d74-9895-4e53-a28b-fbf783d93949-kube-proxy\") pod \"kube-proxy-45vqv\" (UID: \"2e0e8d74-9895-4e53-a28b-fbf783d93949\") " pod="kube-system/kube-proxy-45vqv"
Jul 6 23:28:17.638931 kubelet[3522]: I0706 23:28:17.636626 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e0e8d74-9895-4e53-a28b-fbf783d93949-lib-modules\") pod \"kube-proxy-45vqv\" (UID: \"2e0e8d74-9895-4e53-a28b-fbf783d93949\") " pod="kube-system/kube-proxy-45vqv"
Jul 6 23:28:17.831011 systemd[1]: Created slice kubepods-besteffort-pod96dd351f_970b_47b6_8968_17d3f4978722.slice - libcontainer container kubepods-besteffort-pod96dd351f_970b_47b6_8968_17d3f4978722.slice.
Jul 6 23:28:17.839314 kubelet[3522]: I0706 23:28:17.839258 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96dd351f-970b-47b6-8968-17d3f4978722-var-lib-calico\") pod \"tigera-operator-747864d56d-k2vfs\" (UID: \"96dd351f-970b-47b6-8968-17d3f4978722\") " pod="tigera-operator/tigera-operator-747864d56d-k2vfs"
Jul 6 23:28:17.839440 kubelet[3522]: I0706 23:28:17.839327 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56xl\" (UniqueName: \"kubernetes.io/projected/96dd351f-970b-47b6-8968-17d3f4978722-kube-api-access-z56xl\") pod \"tigera-operator-747864d56d-k2vfs\" (UID: \"96dd351f-970b-47b6-8968-17d3f4978722\") " pod="tigera-operator/tigera-operator-747864d56d-k2vfs"
Jul 6 23:28:17.948516 containerd[2034]: time="2025-07-06T23:28:17.948353098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45vqv,Uid:2e0e8d74-9895-4e53-a28b-fbf783d93949,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:17.999710 containerd[2034]: time="2025-07-06T23:28:17.998779690Z" level=info msg="connecting to shim 7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3" address="unix:///run/containerd/s/6064f61db4ae4b53539d5d4f7aa1da352d51a16f6d71ed2cbf85a773bbc71a81" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:18.047241 systemd[1]: Started cri-containerd-7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3.scope - libcontainer container 7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3.
Jul 6 23:28:18.096563 containerd[2034]: time="2025-07-06T23:28:18.096507859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45vqv,Uid:2e0e8d74-9895-4e53-a28b-fbf783d93949,Namespace:kube-system,Attempt:0,} returns sandbox id \"7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3\""
Jul 6 23:28:18.104379 containerd[2034]: time="2025-07-06T23:28:18.104316943Z" level=info msg="CreateContainer within sandbox \"7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:28:18.128121 containerd[2034]: time="2025-07-06T23:28:18.128040511Z" level=info msg="Container d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:18.139208 containerd[2034]: time="2025-07-06T23:28:18.139127155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k2vfs,Uid:96dd351f-970b-47b6-8968-17d3f4978722,Namespace:tigera-operator,Attempt:0,}"
Jul 6 23:28:18.144810 containerd[2034]: time="2025-07-06T23:28:18.144752131Z" level=info msg="CreateContainer within sandbox \"7542749f6f68c1190b35e53293360e762912a81bb48e13098a2f48b5354bcba3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744\""
Jul 6 23:28:18.146231 containerd[2034]: time="2025-07-06T23:28:18.146180155Z" level=info msg="StartContainer for \"d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744\""
Jul 6 23:28:18.149618 containerd[2034]: time="2025-07-06T23:28:18.149564335Z" level=info msg="connecting to shim d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744" address="unix:///run/containerd/s/6064f61db4ae4b53539d5d4f7aa1da352d51a16f6d71ed2cbf85a773bbc71a81" protocol=ttrpc version=3
Jul 6 23:28:18.199384 systemd[1]: Started cri-containerd-d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744.scope - libcontainer container d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744.
Jul 6 23:28:18.216128 containerd[2034]: time="2025-07-06T23:28:18.215900972Z" level=info msg="connecting to shim 1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf" address="unix:///run/containerd/s/54ede778b0e2970155189693bff55713e36304b200519eec5b99ca68c7f95958" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:18.268236 systemd[1]: Started cri-containerd-1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf.scope - libcontainer container 1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf.
Jul 6 23:28:18.332835 containerd[2034]: time="2025-07-06T23:28:18.332773256Z" level=info msg="StartContainer for \"d0feaadeb04e0474f31ddd85eb3a45b9ddd31db0ee7de0b38253b911dfa23744\" returns successfully"
Jul 6 23:28:18.376760 containerd[2034]: time="2025-07-06T23:28:18.376592444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k2vfs,Uid:96dd351f-970b-47b6-8968-17d3f4978722,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf\""
Jul 6 23:28:18.381869 containerd[2034]: time="2025-07-06T23:28:18.381769004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 6 23:28:18.876009 kubelet[3522]: I0706 23:28:18.875864 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-45vqv" podStartSLOduration=1.875842499 podStartE2EDuration="1.875842499s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:18.875412959 +0000 UTC m=+7.384726658" watchObservedRunningTime="2025-07-06 23:28:18.875842499 +0000 UTC m=+7.385156174"
Jul 6 23:28:19.630123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113446066.mount: Deactivated successfully.
Jul 6 23:28:20.396226 containerd[2034]: time="2025-07-06T23:28:20.396088762Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:20.398985 containerd[2034]: time="2025-07-06T23:28:20.398905138Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 6 23:28:20.403522 containerd[2034]: time="2025-07-06T23:28:20.403411102Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:20.407507 containerd[2034]: time="2025-07-06T23:28:20.407426098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:20.409071 containerd[2034]: time="2025-07-06T23:28:20.408843082Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.026971334s"
Jul 6 23:28:20.409071 containerd[2034]: time="2025-07-06T23:28:20.408900430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 6 23:28:20.415962 containerd[2034]: time="2025-07-06T23:28:20.414554434Z" level=info msg="CreateContainer within sandbox \"1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 6 23:28:20.448981 containerd[2034]: time="2025-07-06T23:28:20.445877831Z" level=info msg="Container 44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:20.455591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636310995.mount: Deactivated successfully.
Jul 6 23:28:20.461459 containerd[2034]: time="2025-07-06T23:28:20.461397947Z" level=info msg="CreateContainer within sandbox \"1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\""
Jul 6 23:28:20.462358 containerd[2034]: time="2025-07-06T23:28:20.462277379Z" level=info msg="StartContainer for \"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\""
Jul 6 23:28:20.464800 containerd[2034]: time="2025-07-06T23:28:20.464660603Z" level=info msg="connecting to shim 44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc" address="unix:///run/containerd/s/54ede778b0e2970155189693bff55713e36304b200519eec5b99ca68c7f95958" protocol=ttrpc version=3
Jul 6 23:28:20.504275 systemd[1]: Started cri-containerd-44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc.scope - libcontainer container 44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc.
Jul 6 23:28:20.568414 containerd[2034]: time="2025-07-06T23:28:20.568284923Z" level=info msg="StartContainer for \"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" returns successfully"
Jul 6 23:28:29.315930 sudo[2379]: pam_unix(sudo:session): session closed for user root
Jul 6 23:28:29.340979 sshd[2378]: Connection closed by 139.178.89.65 port 44494
Jul 6 23:28:29.341002 sshd-session[2376]: pam_unix(sshd:session): session closed for user core
Jul 6 23:28:29.351717 systemd-logind[2000]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:28:29.353237 systemd[1]: sshd@8-172.31.24.125:22-139.178.89.65:44494.service: Deactivated successfully.
Jul 6 23:28:29.363220 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:28:29.365107 systemd[1]: session-9.scope: Consumed 12.051s CPU time, 235.3M memory peak.
Jul 6 23:28:29.372020 systemd-logind[2000]: Removed session 9.
Jul 6 23:28:41.842373 kubelet[3522]: I0706 23:28:41.842282 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-k2vfs" podStartSLOduration=22.811292371 podStartE2EDuration="24.842261469s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="2025-07-06 23:28:18.379586732 +0000 UTC m=+6.888900407" lastFinishedPulling="2025-07-06 23:28:20.41055583 +0000 UTC m=+8.919869505" observedRunningTime="2025-07-06 23:28:20.888432661 +0000 UTC m=+9.397746528" watchObservedRunningTime="2025-07-06 23:28:41.842261469 +0000 UTC m=+30.351575168"
Jul 6 23:28:41.857813 systemd[1]: Created slice kubepods-besteffort-podd019250d_f045_40a8_b1dd_90318926f251.slice - libcontainer container kubepods-besteffort-podd019250d_f045_40a8_b1dd_90318926f251.slice.
Jul 6 23:28:41.880716 kubelet[3522]: I0706 23:28:41.880646 3522 status_manager.go:890] "Failed to get status for pod" podUID="d019250d-f045-40a8-b1dd-90318926f251" pod="calico-system/calico-typha-67956b97fc-smjt6" err="pods \"calico-typha-67956b97fc-smjt6\" is forbidden: User \"system:node:ip-172-31-24-125\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-125' and this object"
Jul 6 23:28:41.882117 kubelet[3522]: W0706 23:28:41.882063 3522 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-125" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-125' and this object
Jul 6 23:28:41.882291 kubelet[3522]: E0706 23:28:41.882134 3522 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-125' and this object" logger="UnhandledError"
Jul 6 23:28:41.882621 kubelet[3522]: W0706 23:28:41.882594 3522 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-24-125" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-125' and this object
Jul 6 23:28:41.882795 kubelet[3522]: E0706 23:28:41.882765 3522 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-24-125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-125' and this object" logger="UnhandledError"
Jul 6 23:28:41.882972 kubelet[3522]: W0706 23:28:41.882592 3522 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-24-125" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-125' and this object
Jul 6 23:28:41.883085 kubelet[3522]: E0706 23:28:41.882929 3522 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-24-125\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-125' and this object" logger="UnhandledError"
Jul 6 23:28:41.905887 kubelet[3522]: I0706 23:28:41.905844 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d019250d-f045-40a8-b1dd-90318926f251-typha-certs\") pod \"calico-typha-67956b97fc-smjt6\" (UID: \"d019250d-f045-40a8-b1dd-90318926f251\") " pod="calico-system/calico-typha-67956b97fc-smjt6"
Jul 6 23:28:41.906484 kubelet[3522]: I0706 23:28:41.906144 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngzw\" (UniqueName: \"kubernetes.io/projected/d019250d-f045-40a8-b1dd-90318926f251-kube-api-access-dngzw\") pod \"calico-typha-67956b97fc-smjt6\" (UID: \"d019250d-f045-40a8-b1dd-90318926f251\") " pod="calico-system/calico-typha-67956b97fc-smjt6"
Jul 6 23:28:41.906773 kubelet[3522]: I0706 23:28:41.906247 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d019250d-f045-40a8-b1dd-90318926f251-tigera-ca-bundle\") pod \"calico-typha-67956b97fc-smjt6\" (UID: \"d019250d-f045-40a8-b1dd-90318926f251\") " pod="calico-system/calico-typha-67956b97fc-smjt6"
Jul 6 23:28:42.112985 systemd[1]: Created slice kubepods-besteffort-podac603eb4_138a_48d9_80f8_5be5d6b6b0eb.slice - libcontainer container kubepods-besteffort-podac603eb4_138a_48d9_80f8_5be5d6b6b0eb.slice.
Jul 6 23:28:42.208835 kubelet[3522]: I0706 23:28:42.208762 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-var-lib-calico\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.208835 kubelet[3522]: I0706 23:28:42.208835 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-var-run-calico\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209128 kubelet[3522]: I0706 23:28:42.208881 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-cni-log-dir\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209128 kubelet[3522]: I0706 23:28:42.208919 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-lib-modules\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209128 kubelet[3522]: I0706 23:28:42.208978 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-policysync\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209128 kubelet[3522]: I0706 23:28:42.209019 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-cni-net-dir\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209128 kubelet[3522]: I0706 23:28:42.209073 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-tigera-ca-bundle\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209396 kubelet[3522]: I0706 23:28:42.209132 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-xtables-lock\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209396 kubelet[3522]: I0706 23:28:42.209173 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2k2d\" (UniqueName: \"kubernetes.io/projected/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-kube-api-access-l2k2d\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209396 kubelet[3522]: I0706 23:28:42.209214 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-cni-bin-dir\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209396 kubelet[3522]: I0706 23:28:42.209254 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-node-certs\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.209396 kubelet[3522]: I0706 23:28:42.209296 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ac603eb4-138a-48d9-80f8-5be5d6b6b0eb-flexvol-driver-host\") pod \"calico-node-7qlnx\" (UID: \"ac603eb4-138a-48d9-80f8-5be5d6b6b0eb\") " pod="calico-system/calico-node-7qlnx"
Jul 6 23:28:42.238728 kubelet[3522]: E0706 23:28:42.238248 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:42.311467 kubelet[3522]: I0706 23:28:42.311399 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6acec3dd-3f28-47f0-aa8c-d062fd8a3781-socket-dir\") pod \"csi-node-driver-7qxkc\" (UID: \"6acec3dd-3f28-47f0-aa8c-d062fd8a3781\") " pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:42.311624 kubelet[3522]: I0706 23:28:42.311493 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqhk\" (UniqueName: \"kubernetes.io/projected/6acec3dd-3f28-47f0-aa8c-d062fd8a3781-kube-api-access-8pqhk\") pod \"csi-node-driver-7qxkc\" (UID: \"6acec3dd-3f28-47f0-aa8c-d062fd8a3781\") " pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:42.311624 kubelet[3522]: I0706 23:28:42.311573 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6acec3dd-3f28-47f0-aa8c-d062fd8a3781-kubelet-dir\") pod \"csi-node-driver-7qxkc\" (UID: \"6acec3dd-3f28-47f0-aa8c-d062fd8a3781\") " pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:42.311624 kubelet[3522]: I0706 23:28:42.311609 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6acec3dd-3f28-47f0-aa8c-d062fd8a3781-registration-dir\") pod \"csi-node-driver-7qxkc\" (UID: \"6acec3dd-3f28-47f0-aa8c-d062fd8a3781\") " pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:42.311786 kubelet[3522]: I0706 23:28:42.311646 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6acec3dd-3f28-47f0-aa8c-d062fd8a3781-varrun\") pod \"csi-node-driver-7qxkc\" (UID: \"6acec3dd-3f28-47f0-aa8c-d062fd8a3781\") " pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:42.326068 kubelet[3522]: E0706 23:28:42.325079 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.326068 kubelet[3522]: W0706 23:28:42.325131 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.326068 kubelet[3522]: E0706 23:28:42.325300 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.355327 kubelet[3522]: E0706 23:28:42.355270 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.355327 kubelet[3522]: W0706 23:28:42.355316 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.355538 kubelet[3522]: E0706 23:28:42.355353 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.413788 kubelet[3522]: E0706 23:28:42.413661 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.415431 kubelet[3522]: W0706 23:28:42.415363 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.415607 kubelet[3522]: E0706 23:28:42.415434 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.417732 kubelet[3522]: E0706 23:28:42.417669 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.417732 kubelet[3522]: W0706 23:28:42.417714 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.417983 kubelet[3522]: E0706 23:28:42.417761 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.419340 kubelet[3522]: E0706 23:28:42.419285 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.419340 kubelet[3522]: W0706 23:28:42.419327 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.419792 kubelet[3522]: E0706 23:28:42.419662 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.420119 kubelet[3522]: E0706 23:28:42.420082 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.420119 kubelet[3522]: W0706 23:28:42.420112 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.420685 kubelet[3522]: E0706 23:28:42.420640 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.421141 kubelet[3522]: E0706 23:28:42.421094 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.421141 kubelet[3522]: W0706 23:28:42.421132 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.421407 kubelet[3522]: E0706 23:28:42.421355 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.422357 kubelet[3522]: E0706 23:28:42.422294 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.422357 kubelet[3522]: W0706 23:28:42.422346 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.422902 kubelet[3522]: E0706 23:28:42.422865 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:42.423708 kubelet[3522]: E0706 23:28:42.423663 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:42.423708 kubelet[3522]: W0706 23:28:42.423699 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:42.424135 kubelet[3522]: E0706 23:28:42.424050 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 6 23:28:42.425067 kubelet[3522]: E0706 23:28:42.425000 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.425067 kubelet[3522]: W0706 23:28:42.425056 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.426178 kubelet[3522]: E0706 23:28:42.426114 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.427651 kubelet[3522]: E0706 23:28:42.427604 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.427651 kubelet[3522]: W0706 23:28:42.427644 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.428102 kubelet[3522]: E0706 23:28:42.428054 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.428261 kubelet[3522]: E0706 23:28:42.428225 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.428261 kubelet[3522]: W0706 23:28:42.428254 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.428518 kubelet[3522]: E0706 23:28:42.428312 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.429286 kubelet[3522]: E0706 23:28:42.429235 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.429286 kubelet[3522]: W0706 23:28:42.429274 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.429698 kubelet[3522]: E0706 23:28:42.429534 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.430275 kubelet[3522]: E0706 23:28:42.430230 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.430275 kubelet[3522]: W0706 23:28:42.430265 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.430703 kubelet[3522]: E0706 23:28:42.430659 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.431423 kubelet[3522]: E0706 23:28:42.431320 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.431423 kubelet[3522]: W0706 23:28:42.431414 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.432104 kubelet[3522]: E0706 23:28:42.432054 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.432408 kubelet[3522]: E0706 23:28:42.432350 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.432408 kubelet[3522]: W0706 23:28:42.432384 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.432408 kubelet[3522]: E0706 23:28:42.432446 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.433164 kubelet[3522]: E0706 23:28:42.433112 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.433164 kubelet[3522]: W0706 23:28:42.433148 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.433833 kubelet[3522]: E0706 23:28:42.433266 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.434364 kubelet[3522]: E0706 23:28:42.434323 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.434364 kubelet[3522]: W0706 23:28:42.434358 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.434683 kubelet[3522]: E0706 23:28:42.434614 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.436158 kubelet[3522]: E0706 23:28:42.436101 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.436158 kubelet[3522]: W0706 23:28:42.436143 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.436378 kubelet[3522]: E0706 23:28:42.436268 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.437121 kubelet[3522]: E0706 23:28:42.437075 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.437121 kubelet[3522]: W0706 23:28:42.437115 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.437446 kubelet[3522]: E0706 23:28:42.437404 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.439584 kubelet[3522]: E0706 23:28:42.439531 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.439584 kubelet[3522]: W0706 23:28:42.439571 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.441193 kubelet[3522]: E0706 23:28:42.439815 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.441593 kubelet[3522]: E0706 23:28:42.441547 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.441593 kubelet[3522]: W0706 23:28:42.441586 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.441744 kubelet[3522]: E0706 23:28:42.441715 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.442506 kubelet[3522]: E0706 23:28:42.442458 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.442506 kubelet[3522]: W0706 23:28:42.442495 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.442847 kubelet[3522]: E0706 23:28:42.442802 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.444049 kubelet[3522]: E0706 23:28:42.443887 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.444049 kubelet[3522]: W0706 23:28:42.444037 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.444298 kubelet[3522]: E0706 23:28:42.444257 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.444799 kubelet[3522]: E0706 23:28:42.444757 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.444904 kubelet[3522]: W0706 23:28:42.444809 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.445775 kubelet[3522]: E0706 23:28:42.445702 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.447182 kubelet[3522]: E0706 23:28:42.447131 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.447182 kubelet[3522]: W0706 23:28:42.447171 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.447397 kubelet[3522]: E0706 23:28:42.447218 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.447723 kubelet[3522]: E0706 23:28:42.447691 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.447723 kubelet[3522]: W0706 23:28:42.447718 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.447843 kubelet[3522]: E0706 23:28:42.447742 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:42.835074 kubelet[3522]: E0706 23:28:42.835011 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.835855 kubelet[3522]: W0706 23:28:42.835431 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.835855 kubelet[3522]: E0706 23:28:42.835504 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:42.837151 kubelet[3522]: E0706 23:28:42.837078 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:42.837151 kubelet[3522]: W0706 23:28:42.837108 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:42.837438 kubelet[3522]: E0706 23:28:42.837325 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.008588 kubelet[3522]: E0706 23:28:43.008515 3522 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 6 23:28:43.010150 kubelet[3522]: E0706 23:28:43.008658 3522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d019250d-f045-40a8-b1dd-90318926f251-typha-certs podName:d019250d-f045-40a8-b1dd-90318926f251 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:43.508623471 +0000 UTC m=+32.017937158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/d019250d-f045-40a8-b1dd-90318926f251-typha-certs") pod "calico-typha-67956b97fc-smjt6" (UID: "d019250d-f045-40a8-b1dd-90318926f251") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:28:43.019624 kubelet[3522]: E0706 23:28:43.019208 3522 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:43.019624 kubelet[3522]: E0706 23:28:43.019262 3522 projected.go:194] Error preparing data for projected volume kube-api-access-dngzw for pod calico-system/calico-typha-67956b97fc-smjt6: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:43.019624 kubelet[3522]: E0706 23:28:43.019358 3522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d019250d-f045-40a8-b1dd-90318926f251-kube-api-access-dngzw podName:d019250d-f045-40a8-b1dd-90318926f251 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:43.519329799 +0000 UTC m=+32.028643486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dngzw" (UniqueName: "kubernetes.io/projected/d019250d-f045-40a8-b1dd-90318926f251-kube-api-access-dngzw") pod "calico-typha-67956b97fc-smjt6" (UID: "d019250d-f045-40a8-b1dd-90318926f251") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:43.033580 kubelet[3522]: E0706 23:28:43.033540 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.033974 kubelet[3522]: W0706 23:28:43.033738 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.033974 kubelet[3522]: E0706 23:28:43.033780 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.034670 kubelet[3522]: E0706 23:28:43.034465 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.034670 kubelet[3522]: W0706 23:28:43.034493 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.034670 kubelet[3522]: E0706 23:28:43.034522 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.135987 kubelet[3522]: E0706 23:28:43.135830 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.135987 kubelet[3522]: W0706 23:28:43.135868 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.135987 kubelet[3522]: E0706 23:28:43.135902 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.137203 kubelet[3522]: E0706 23:28:43.137010 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.137203 kubelet[3522]: W0706 23:28:43.137057 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.137203 kubelet[3522]: E0706 23:28:43.137087 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.238976 kubelet[3522]: E0706 23:28:43.238766 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.238976 kubelet[3522]: W0706 23:28:43.238799 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.238976 kubelet[3522]: E0706 23:28:43.238829 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.239620 kubelet[3522]: E0706 23:28:43.239496 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.239620 kubelet[3522]: W0706 23:28:43.239523 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.239620 kubelet[3522]: E0706 23:28:43.239548 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.289989 kubelet[3522]: E0706 23:28:43.289149 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.289989 kubelet[3522]: W0706 23:28:43.289184 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.290467 kubelet[3522]: E0706 23:28:43.290229 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.291289 kubelet[3522]: E0706 23:28:43.291176 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.291289 kubelet[3522]: W0706 23:28:43.291204 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.291289 kubelet[3522]: E0706 23:28:43.291234 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.322373 containerd[2034]: time="2025-07-06T23:28:43.322297832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qlnx,Uid:ac603eb4-138a-48d9-80f8-5be5d6b6b0eb,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:43.340978 kubelet[3522]: E0706 23:28:43.340879 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.340978 kubelet[3522]: W0706 23:28:43.340914 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.341433 kubelet[3522]: E0706 23:28:43.341279 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.342004 kubelet[3522]: E0706 23:28:43.341931 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.342291 kubelet[3522]: W0706 23:28:43.342148 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.342291 kubelet[3522]: E0706 23:28:43.342192 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.377278 containerd[2034]: time="2025-07-06T23:28:43.376764681Z" level=info msg="connecting to shim 048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08" address="unix:///run/containerd/s/8638eeb44030b819ce0352c6b48a7cae0c0b70bb81ca7589dbfa54c30ac02a10" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:43.436373 systemd[1]: Started cri-containerd-048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08.scope - libcontainer container 048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08. Jul 6 23:28:43.445627 kubelet[3522]: E0706 23:28:43.445343 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.445627 kubelet[3522]: W0706 23:28:43.445383 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.445627 kubelet[3522]: E0706 23:28:43.445434 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.446513 kubelet[3522]: E0706 23:28:43.446476 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.446513 kubelet[3522]: W0706 23:28:43.446506 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.446715 kubelet[3522]: E0706 23:28:43.446539 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.513606 containerd[2034]: time="2025-07-06T23:28:43.513521505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qlnx,Uid:ac603eb4-138a-48d9-80f8-5be5d6b6b0eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\"" Jul 6 23:28:43.517771 containerd[2034]: time="2025-07-06T23:28:43.517590513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:28:43.547807 kubelet[3522]: E0706 23:28:43.547740 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.547807 kubelet[3522]: W0706 23:28:43.547785 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.548266 kubelet[3522]: E0706 23:28:43.547823 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.548553 kubelet[3522]: E0706 23:28:43.548492 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.548553 kubelet[3522]: W0706 23:28:43.548542 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.549621 kubelet[3522]: E0706 23:28:43.548593 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:43.550255 kubelet[3522]: E0706 23:28:43.550125 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.550255 kubelet[3522]: W0706 23:28:43.550191 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.550597 kubelet[3522]: E0706 23:28:43.550497 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:43.551321 kubelet[3522]: E0706 23:28:43.551069 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:43.551321 kubelet[3522]: W0706 23:28:43.551322 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:43.551655 kubelet[3522]: E0706 23:28:43.551376 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:28:43.552379 kubelet[3522]: E0706 23:28:43.552329 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.552730 kubelet[3522]: W0706 23:28:43.552571 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.552730 kubelet[3522]: E0706 23:28:43.552668 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.553645 kubelet[3522]: E0706 23:28:43.553452 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.553645 kubelet[3522]: W0706 23:28:43.553520 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.553645 kubelet[3522]: E0706 23:28:43.553611 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.554701 kubelet[3522]: E0706 23:28:43.554626 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.555162 kubelet[3522]: W0706 23:28:43.554931 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.555162 kubelet[3522]: E0706 23:28:43.555039 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.555744 kubelet[3522]: E0706 23:28:43.555704 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.556303 kubelet[3522]: W0706 23:28:43.555981 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.556303 kubelet[3522]: E0706 23:28:43.556075 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.556713 kubelet[3522]: E0706 23:28:43.556680 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.556859 kubelet[3522]: W0706 23:28:43.556828 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.557066 kubelet[3522]: E0706 23:28:43.557020 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.558035 kubelet[3522]: E0706 23:28:43.557982 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.558854 kubelet[3522]: W0706 23:28:43.558232 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.558854 kubelet[3522]: E0706 23:28:43.558284 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.571310 kubelet[3522]: E0706 23:28:43.570694 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.573832 kubelet[3522]: W0706 23:28:43.572535 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.574507 kubelet[3522]: E0706 23:28:43.574145 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.586318 kubelet[3522]: E0706 23:28:43.586258 3522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:43.586318 kubelet[3522]: W0706 23:28:43.586304 3522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:43.586555 kubelet[3522]: E0706 23:28:43.586345 3522 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:43.668871 containerd[2034]: time="2025-07-06T23:28:43.668782474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67956b97fc-smjt6,Uid:d019250d-f045-40a8-b1dd-90318926f251,Namespace:calico-system,Attempt:0,}"
Jul 6 23:28:43.731778 containerd[2034]: time="2025-07-06T23:28:43.731561002Z" level=info msg="connecting to shim 3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a" address="unix:///run/containerd/s/b59dd73bf825e321283e63a4fcb6d09a90f5e85d8926c03205e2f0bc848f1405" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:43.757389 kubelet[3522]: E0706 23:28:43.757317 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:43.794258 systemd[1]: Started cri-containerd-3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a.scope - libcontainer container 3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a.
Jul 6 23:28:43.880756 containerd[2034]: time="2025-07-06T23:28:43.880666571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67956b97fc-smjt6,Uid:d019250d-f045-40a8-b1dd-90318926f251,Namespace:calico-system,Attempt:0,} returns sandbox id \"3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a\""
Jul 6 23:28:44.778898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919624609.mount: Deactivated successfully.
Jul 6 23:28:44.911475 containerd[2034]: time="2025-07-06T23:28:44.911420892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:44.913558 containerd[2034]: time="2025-07-06T23:28:44.913517544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360"
Jul 6 23:28:44.915720 containerd[2034]: time="2025-07-06T23:28:44.915678420Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:44.920208 containerd[2034]: time="2025-07-06T23:28:44.920146908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:44.921401 containerd[2034]: time="2025-07-06T23:28:44.921343848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.403580451s"
Jul 6 23:28:44.921487 containerd[2034]: time="2025-07-06T23:28:44.921400356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 6 23:28:44.923842 containerd[2034]: time="2025-07-06T23:28:44.923775936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 6 23:28:44.929670 containerd[2034]: time="2025-07-06T23:28:44.929605692Z" level=info msg="CreateContainer within sandbox \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 6 23:28:44.951161 containerd[2034]: time="2025-07-06T23:28:44.951110112Z" level=info msg="Container a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:44.984047 containerd[2034]: time="2025-07-06T23:28:44.983970985Z" level=info msg="CreateContainer within sandbox \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\""
Jul 6 23:28:44.987303 containerd[2034]: time="2025-07-06T23:28:44.987245905Z" level=info msg="StartContainer for \"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\""
Jul 6 23:28:44.992054 containerd[2034]: time="2025-07-06T23:28:44.991992889Z" level=info msg="connecting to shim a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5" address="unix:///run/containerd/s/8638eeb44030b819ce0352c6b48a7cae0c0b70bb81ca7589dbfa54c30ac02a10" protocol=ttrpc version=3
Jul 6 23:28:45.031266 systemd[1]: Started cri-containerd-a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5.scope - libcontainer container a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5.
Jul 6 23:28:45.110609 containerd[2034]: time="2025-07-06T23:28:45.110538033Z" level=info msg="StartContainer for \"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\" returns successfully"
Jul 6 23:28:45.141644 systemd[1]: cri-containerd-a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5.scope: Deactivated successfully.
Jul 6 23:28:45.149214 containerd[2034]: time="2025-07-06T23:28:45.149164113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\" id:\"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\" pid:4094 exited_at:{seconds:1751844525 nanos:148418181}"
Jul 6 23:28:45.149477 containerd[2034]: time="2025-07-06T23:28:45.149395149Z" level=info msg="received exit event container_id:\"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\" id:\"a1adc319f1226fecb8235d8993cc6010e8f04a9d7f5b6f068b7fae434aa451e5\" pid:4094 exited_at:{seconds:1751844525 nanos:148418181}"
Jul 6 23:28:45.757277 kubelet[3522]: E0706 23:28:45.756684 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:47.404248 containerd[2034]: time="2025-07-06T23:28:47.403815817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:47.406031 containerd[2034]: time="2025-07-06T23:28:47.405957325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828"
Jul 6 23:28:47.408298 containerd[2034]: time="2025-07-06T23:28:47.408219229Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:47.413574 containerd[2034]: time="2025-07-06T23:28:47.413465881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:47.415038 containerd[2034]: time="2025-07-06T23:28:47.414848197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.491007869s"
Jul 6 23:28:47.415038 containerd[2034]: time="2025-07-06T23:28:47.414911377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 6 23:28:47.419992 containerd[2034]: time="2025-07-06T23:28:47.418624273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 6 23:28:47.444243 containerd[2034]: time="2025-07-06T23:28:47.444175477Z" level=info msg="CreateContainer within sandbox \"3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:28:47.463373 containerd[2034]: time="2025-07-06T23:28:47.463304149Z" level=info msg="Container d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:47.485363 containerd[2034]: time="2025-07-06T23:28:47.485280505Z" level=info msg="CreateContainer within sandbox \"3112686fc52ae86e36f1c19ac56b5d2753c5e2840d50dee80639d40e5bccc77a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427\""
Jul 6 23:28:47.486357 containerd[2034]: time="2025-07-06T23:28:47.486284605Z" level=info msg="StartContainer for \"d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427\""
Jul 6 23:28:47.489654 containerd[2034]: time="2025-07-06T23:28:47.489589333Z" level=info msg="connecting to shim d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427" address="unix:///run/containerd/s/b59dd73bf825e321283e63a4fcb6d09a90f5e85d8926c03205e2f0bc848f1405" protocol=ttrpc version=3
Jul 6 23:28:47.528661 systemd[1]: Started cri-containerd-d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427.scope - libcontainer container d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427.
Jul 6 23:28:47.622141 containerd[2034]: time="2025-07-06T23:28:47.622008626Z" level=info msg="StartContainer for \"d67242844084e5c4a882fde41f3e9314ef0123fd5d2237f160101ab499818427\" returns successfully"
Jul 6 23:28:47.757867 kubelet[3522]: E0706 23:28:47.757796 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:49.017992 kubelet[3522]: I0706 23:28:49.017746 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67956b97fc-smjt6" podStartSLOduration=4.483979043 podStartE2EDuration="8.017720617s" podCreationTimestamp="2025-07-06 23:28:41 +0000 UTC" firstStartedPulling="2025-07-06 23:28:43.883288583 +0000 UTC m=+32.392602270" lastFinishedPulling="2025-07-06 23:28:47.417030169 +0000 UTC m=+35.926343844" observedRunningTime="2025-07-06 23:28:48.018464052 +0000 UTC m=+36.527777751" watchObservedRunningTime="2025-07-06 23:28:49.017720617 +0000 UTC m=+37.527034304"
Jul 6 23:28:49.760021 kubelet[3522]: E0706 23:28:49.757763 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:50.436379 containerd[2034]: time="2025-07-06T23:28:50.436316500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:50.440106 containerd[2034]: time="2025-07-06T23:28:50.440027140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 6 23:28:50.442637 containerd[2034]: time="2025-07-06T23:28:50.442526284Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:50.447604 containerd[2034]: time="2025-07-06T23:28:50.447491020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:50.449464 containerd[2034]: time="2025-07-06T23:28:50.449214976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.030525843s"
Jul 6 23:28:50.449464 containerd[2034]: time="2025-07-06T23:28:50.449281408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 6 23:28:50.456611 containerd[2034]: time="2025-07-06T23:28:50.456509248Z" level=info msg="CreateContainer within sandbox \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 6 23:28:50.475970 containerd[2034]: time="2025-07-06T23:28:50.475865116Z" level=info msg="Container e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:50.502429 containerd[2034]: time="2025-07-06T23:28:50.502345060Z" level=info msg="CreateContainer within sandbox \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\""
Jul 6 23:28:50.503895 containerd[2034]: time="2025-07-06T23:28:50.503792776Z" level=info msg="StartContainer for \"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\""
Jul 6 23:28:50.508856 containerd[2034]: time="2025-07-06T23:28:50.508774504Z" level=info msg="connecting to shim e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50" address="unix:///run/containerd/s/8638eeb44030b819ce0352c6b48a7cae0c0b70bb81ca7589dbfa54c30ac02a10" protocol=ttrpc version=3
Jul 6 23:28:50.554254 systemd[1]: Started cri-containerd-e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50.scope - libcontainer container e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50.
Jul 6 23:28:50.643606 containerd[2034]: time="2025-07-06T23:28:50.643520813Z" level=info msg="StartContainer for \"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\" returns successfully"
Jul 6 23:28:51.554856 containerd[2034]: time="2025-07-06T23:28:51.554785409Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:28:51.559208 systemd[1]: cri-containerd-e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50.scope: Deactivated successfully.
Jul 6 23:28:51.559787 systemd[1]: cri-containerd-e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50.scope: Consumed 963ms CPU time, 188.6M memory peak, 165.8M written to disk.
Jul 6 23:28:51.566066 containerd[2034]: time="2025-07-06T23:28:51.565923017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\" id:\"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\" pid:4198 exited_at:{seconds:1751844531 nanos:564838457}"
Jul 6 23:28:51.566781 containerd[2034]: time="2025-07-06T23:28:51.566216489Z" level=info msg="received exit event container_id:\"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\" id:\"e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50\" pid:4198 exited_at:{seconds:1751844531 nanos:564838457}"
Jul 6 23:28:51.569101 kubelet[3522]: I0706 23:28:51.567929 3522 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:28:51.639311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54fde2f3f4a221af66665cfefe90bd25a0f8b69eb90ab8044f27cefa4914f50-rootfs.mount: Deactivated successfully.
Jul 6 23:28:51.669709 systemd[1]: Created slice kubepods-burstable-pod28268df8_9281_4f79_a130_45c4535d7f25.slice - libcontainer container kubepods-burstable-pod28268df8_9281_4f79_a130_45c4535d7f25.slice.
Jul 6 23:28:51.710125 systemd[1]: Created slice kubepods-besteffort-podddd89310_58db_47b0_a7b4_d9cde8e0d91b.slice - libcontainer container kubepods-besteffort-podddd89310_58db_47b0_a7b4_d9cde8e0d91b.slice.
Jul 6 23:28:51.713134 kubelet[3522]: I0706 23:28:51.712825 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfcj\" (UniqueName: \"kubernetes.io/projected/28268df8-9281-4f79-a130-45c4535d7f25-kube-api-access-jxfcj\") pod \"coredns-668d6bf9bc-7c4tq\" (UID: \"28268df8-9281-4f79-a130-45c4535d7f25\") " pod="kube-system/coredns-668d6bf9bc-7c4tq"
Jul 6 23:28:51.713134 kubelet[3522]: I0706 23:28:51.712916 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28268df8-9281-4f79-a130-45c4535d7f25-config-volume\") pod \"coredns-668d6bf9bc-7c4tq\" (UID: \"28268df8-9281-4f79-a130-45c4535d7f25\") " pod="kube-system/coredns-668d6bf9bc-7c4tq"
Jul 6 23:28:51.736436 systemd[1]: Created slice kubepods-burstable-pode0a04f64_8a0b_40ef_9fb0_940a3feff5bc.slice - libcontainer container kubepods-burstable-pode0a04f64_8a0b_40ef_9fb0_940a3feff5bc.slice.
Jul 6 23:28:51.759547 kubelet[3522]: W0706 23:28:51.759484 3522 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-24-125" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-125' and this object
Jul 6 23:28:51.761117 kubelet[3522]: E0706 23:28:51.761067 3522 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-24-125\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-125' and this object" logger="UnhandledError"
Jul 6 23:28:51.773133 systemd[1]: Created slice kubepods-besteffort-podc466df60_ba1c_453f_9253_7f09b565b994.slice - libcontainer container kubepods-besteffort-podc466df60_ba1c_453f_9253_7f09b565b994.slice.
Jul 6 23:28:51.807109 systemd[1]: Created slice kubepods-besteffort-pod681b5493_6ec2_48d8_b1bd_05c7e34a77d0.slice - libcontainer container kubepods-besteffort-pod681b5493_6ec2_48d8_b1bd_05c7e34a77d0.slice.
Jul 6 23:28:51.813449 kubelet[3522]: I0706 23:28:51.813379 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwn2\" (UniqueName: \"kubernetes.io/projected/fa2c4d51-a0cf-4405-85e5-c4308819e470-kube-api-access-7nwn2\") pod \"goldmane-768f4c5c69-kpgkp\" (UID: \"fa2c4d51-a0cf-4405-85e5-c4308819e470\") " pod="calico-system/goldmane-768f4c5c69-kpgkp"
Jul 6 23:28:51.813605 kubelet[3522]: I0706 23:28:51.813484 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/681b5493-6ec2-48d8-b1bd-05c7e34a77d0-calico-apiserver-certs\") pod \"calico-apiserver-6777f4cb5-fz7lq\" (UID: \"681b5493-6ec2-48d8-b1bd-05c7e34a77d0\") " pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq"
Jul 6 23:28:51.813605 kubelet[3522]: I0706 23:28:51.813539 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrcx\" (UniqueName: \"kubernetes.io/projected/681b5493-6ec2-48d8-b1bd-05c7e34a77d0-kube-api-access-cqrcx\") pod \"calico-apiserver-6777f4cb5-fz7lq\" (UID: \"681b5493-6ec2-48d8-b1bd-05c7e34a77d0\") " pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq"
Jul 6 23:28:51.813605 kubelet[3522]: I0706 23:28:51.813583 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a04f64-8a0b-40ef-9fb0-940a3feff5bc-config-volume\") pod \"coredns-668d6bf9bc-g4898\" (UID: \"e0a04f64-8a0b-40ef-9fb0-940a3feff5bc\") " pod="kube-system/coredns-668d6bf9bc-g4898"
Jul 6 23:28:51.813788 kubelet[3522]: I0706 23:28:51.813644 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2c4d51-a0cf-4405-85e5-c4308819e470-config\") pod \"goldmane-768f4c5c69-kpgkp\" (UID: \"fa2c4d51-a0cf-4405-85e5-c4308819e470\") " pod="calico-system/goldmane-768f4c5c69-kpgkp"
Jul 6 23:28:51.813788 kubelet[3522]: I0706 23:28:51.813690 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6ws\" (UniqueName: \"kubernetes.io/projected/e0a04f64-8a0b-40ef-9fb0-940a3feff5bc-kube-api-access-sl6ws\") pod \"coredns-668d6bf9bc-g4898\" (UID: \"e0a04f64-8a0b-40ef-9fb0-940a3feff5bc\") " pod="kube-system/coredns-668d6bf9bc-g4898"
Jul 6 23:28:51.813788 kubelet[3522]: I0706 23:28:51.813727 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa2c4d51-a0cf-4405-85e5-c4308819e470-goldmane-key-pair\") pod \"goldmane-768f4c5c69-kpgkp\" (UID: \"fa2c4d51-a0cf-4405-85e5-c4308819e470\") " pod="calico-system/goldmane-768f4c5c69-kpgkp"
Jul 6 23:28:51.813788 kubelet[3522]: I0706 23:28:51.813765 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c466df60-ba1c-453f-9253-7f09b565b994-whisker-ca-bundle\") pod \"whisker-544d97f4f6-sr6f4\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " pod="calico-system/whisker-544d97f4f6-sr6f4"
Jul 6 23:28:51.814695 kubelet[3522]: I0706 23:28:51.813802 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjw4\" (UniqueName: \"kubernetes.io/projected/ddd89310-58db-47b0-a7b4-d9cde8e0d91b-kube-api-access-lzjw4\") pod \"calico-kube-controllers-5cb89dfdd6-4n8l4\" (UID: \"ddd89310-58db-47b0-a7b4-d9cde8e0d91b\") " pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4"
Jul 6 23:28:51.814695 kubelet[3522]: I0706 23:28:51.813840 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k47kp\" (UniqueName: \"kubernetes.io/projected/c466df60-ba1c-453f-9253-7f09b565b994-kube-api-access-k47kp\") pod \"whisker-544d97f4f6-sr6f4\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " pod="calico-system/whisker-544d97f4f6-sr6f4"
Jul 6 23:28:51.814695 kubelet[3522]: I0706 23:28:51.813877 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa2c4d51-a0cf-4405-85e5-c4308819e470-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-kpgkp\" (UID: \"fa2c4d51-a0cf-4405-85e5-c4308819e470\") " pod="calico-system/goldmane-768f4c5c69-kpgkp"
Jul 6 23:28:51.817496 kubelet[3522]: I0706 23:28:51.813922 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/715b37dd-4c55-4c71-8494-cc2f493772ba-calico-apiserver-certs\") pod \"calico-apiserver-6777f4cb5-jqnmg\" (UID: \"715b37dd-4c55-4c71-8494-cc2f493772ba\") " pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg"
Jul 6 23:28:51.819543 kubelet[3522]: I0706 23:28:51.818432 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nx7\" (UniqueName: \"kubernetes.io/projected/715b37dd-4c55-4c71-8494-cc2f493772ba-kube-api-access-m6nx7\") pod \"calico-apiserver-6777f4cb5-jqnmg\" (UID: \"715b37dd-4c55-4c71-8494-cc2f493772ba\") " pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg"
Jul 6 23:28:51.819543 kubelet[3522]: I0706 23:28:51.818534 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddd89310-58db-47b0-a7b4-d9cde8e0d91b-tigera-ca-bundle\") pod \"calico-kube-controllers-5cb89dfdd6-4n8l4\" (UID: \"ddd89310-58db-47b0-a7b4-d9cde8e0d91b\") " pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4"
Jul 6 23:28:51.819543 kubelet[3522]: I0706 23:28:51.818600 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair\") pod \"whisker-544d97f4f6-sr6f4\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " pod="calico-system/whisker-544d97f4f6-sr6f4"
Jul 6 23:28:51.826875 systemd[1]: Created slice kubepods-besteffort-podfa2c4d51_a0cf_4405_85e5_c4308819e470.slice - libcontainer container kubepods-besteffort-podfa2c4d51_a0cf_4405_85e5_c4308819e470.slice.
Jul 6 23:28:51.842257 systemd[1]: Created slice kubepods-besteffort-pod715b37dd_4c55_4c71_8494_cc2f493772ba.slice - libcontainer container kubepods-besteffort-pod715b37dd_4c55_4c71_8494_cc2f493772ba.slice.
Jul 6 23:28:51.866854 systemd[1]: Created slice kubepods-besteffort-pod6acec3dd_3f28_47f0_aa8c_d062fd8a3781.slice - libcontainer container kubepods-besteffort-pod6acec3dd_3f28_47f0_aa8c_d062fd8a3781.slice.
Jul 6 23:28:51.873503 containerd[2034]: time="2025-07-06T23:28:51.873277951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7qxkc,Uid:6acec3dd-3f28-47f0-aa8c-d062fd8a3781,Namespace:calico-system,Attempt:0,}"
Jul 6 23:28:52.037159 containerd[2034]: time="2025-07-06T23:28:52.036224752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7c4tq,Uid:28268df8-9281-4f79-a130-45c4535d7f25,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:52.067464 containerd[2034]: time="2025-07-06T23:28:52.067289884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4898,Uid:e0a04f64-8a0b-40ef-9fb0-940a3feff5bc,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:52.125762 containerd[2034]: time="2025-07-06T23:28:52.125439856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-fz7lq,Uid:681b5493-6ec2-48d8-b1bd-05c7e34a77d0,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:28:52.139877 containerd[2034]: time="2025-07-06T23:28:52.139812640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kpgkp,Uid:fa2c4d51-a0cf-4405-85e5-c4308819e470,Namespace:calico-system,Attempt:0,}"
Jul 6 23:28:52.157809 containerd[2034]: time="2025-07-06T23:28:52.157753960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-jqnmg,Uid:715b37dd-4c55-4c71-8494-cc2f493772ba,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:28:52.195421 containerd[2034]: time="2025-07-06T23:28:52.195361228Z" level=error msg="Failed to destroy network for sandbox \"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.324193 containerd[2034]: time="2025-07-06T23:28:52.324011201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb89dfdd6-4n8l4,Uid:ddd89310-58db-47b0-a7b4-d9cde8e0d91b,Namespace:calico-system,Attempt:0,}"
Jul 6 23:28:52.428487 containerd[2034]: time="2025-07-06T23:28:52.428419470Z" level=error msg="Failed to destroy network for sandbox \"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.654377 systemd[1]: run-netns-cni\x2de6c837d1\x2dd097\x2dddd4\x2de3e0\x2df1d24a6aabf4.mount: Deactivated successfully.
Jul 6 23:28:52.785314 containerd[2034]: time="2025-07-06T23:28:52.785165419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7qxkc,Uid:6acec3dd-3f28-47f0-aa8c-d062fd8a3781,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.786825 kubelet[3522]: E0706 23:28:52.785640 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.786825 kubelet[3522]: E0706 23:28:52.786405 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:52.786825 kubelet[3522]: E0706 23:28:52.786447 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7qxkc"
Jul 6 23:28:52.787452 kubelet[3522]: E0706 23:28:52.786530 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7qxkc_calico-system(6acec3dd-3f28-47f0-aa8c-d062fd8a3781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7qxkc_calico-system(6acec3dd-3f28-47f0-aa8c-d062fd8a3781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c05891ecfa2dd7ef0af2fd637f557a1263e1f1913c23111cc56efd0418210089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7qxkc" podUID="6acec3dd-3f28-47f0-aa8c-d062fd8a3781"
Jul 6 23:28:52.836418 containerd[2034]: time="2025-07-06T23:28:52.836256404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7c4tq,Uid:28268df8-9281-4f79-a130-45c4535d7f25,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.838062 kubelet[3522]: E0706 23:28:52.837468 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:28:52.838062 kubelet[3522]: E0706 23:28:52.837553 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7c4tq"
Jul 6 23:28:52.838062 kubelet[3522]: E0706 23:28:52.837588 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7c4tq"
Jul 6 23:28:52.838334 kubelet[3522]: E0706 23:28:52.837658 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7c4tq_kube-system(28268df8-9281-4f79-a130-45c4535d7f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7c4tq_kube-system(28268df8-9281-4f79-a130-45c4535d7f25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9074d489048369e1164e858bb000e4563fe249126a590f6791750cfac423e0ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7c4tq" podUID="28268df8-9281-4f79-a130-45c4535d7f25"
Jul 6 23:28:52.935568 kubelet[3522]: E0706 23:28:52.934837 3522 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition
Jul 6 23:28:52.936205 kubelet[3522]: E0706 23:28:52.935773 3522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair
podName:c466df60-ba1c-453f-9253-7f09b565b994 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:53.43492836 +0000 UTC m=+41.944242059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair") pod "whisker-544d97f4f6-sr6f4" (UID: "c466df60-ba1c-453f-9253-7f09b565b994") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:28:53.042926 containerd[2034]: time="2025-07-06T23:28:53.042685997Z" level=error msg="Failed to destroy network for sandbox \"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.048074 systemd[1]: run-netns-cni\x2df20ff7e7\x2d0cba\x2d2c65\x2dcc56\x2d07936020f433.mount: Deactivated successfully. 
Jul 6 23:28:53.054613 containerd[2034]: time="2025-07-06T23:28:53.054543509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4898,Uid:e0a04f64-8a0b-40ef-9fb0-940a3feff5bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.055314 kubelet[3522]: E0706 23:28:53.055118 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.055314 kubelet[3522]: E0706 23:28:53.055207 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4898" Jul 6 23:28:53.055314 kubelet[3522]: E0706 23:28:53.055242 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4898" 
Jul 6 23:28:53.056174 kubelet[3522]: E0706 23:28:53.055314 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g4898_kube-system(e0a04f64-8a0b-40ef-9fb0-940a3feff5bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g4898_kube-system(e0a04f64-8a0b-40ef-9fb0-940a3feff5bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e86d4e6ec7ed6ed272eb93afe395b5d6379fcb85fa821c86e8634a55e909a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4898" podUID="e0a04f64-8a0b-40ef-9fb0-940a3feff5bc" Jul 6 23:28:53.090256 containerd[2034]: time="2025-07-06T23:28:53.089925173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:28:53.158890 containerd[2034]: time="2025-07-06T23:28:53.158636045Z" level=error msg="Failed to destroy network for sandbox \"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.161007 containerd[2034]: time="2025-07-06T23:28:53.159931325Z" level=error msg="Failed to destroy network for sandbox \"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.164796 containerd[2034]: time="2025-07-06T23:28:53.164725241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kpgkp,Uid:fa2c4d51-a0cf-4405-85e5-c4308819e470,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.167111 kubelet[3522]: E0706 23:28:53.167018 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.168896 kubelet[3522]: E0706 23:28:53.167253 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kpgkp" Jul 6 23:28:53.168896 kubelet[3522]: E0706 23:28:53.167412 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kpgkp" Jul 6 23:28:53.168896 kubelet[3522]: E0706 23:28:53.167587 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-kpgkp_calico-system(fa2c4d51-a0cf-4405-85e5-c4308819e470)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-kpgkp_calico-system(fa2c4d51-a0cf-4405-85e5-c4308819e470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1385e8cb57277b40b4bca6bb4b0505f72ebbc92ffcf1cedc87020701b009b120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-kpgkp" podUID="fa2c4d51-a0cf-4405-85e5-c4308819e470" Jul 6 23:28:53.168408 systemd[1]: run-netns-cni\x2da850a11d\x2d06a3\x2dfecf\x2d959e\x2dddd659832d06.mount: Deactivated successfully. Jul 6 23:28:53.168600 systemd[1]: run-netns-cni\x2dcb6778c6\x2d9911\x2dde2d\x2d2149\x2d73a25bf467ea.mount: Deactivated successfully. Jul 6 23:28:53.169568 containerd[2034]: time="2025-07-06T23:28:53.169346489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-jqnmg,Uid:715b37dd-4c55-4c71-8494-cc2f493772ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.172699 kubelet[3522]: E0706 23:28:53.170274 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.172699 kubelet[3522]: E0706 23:28:53.170801 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg" Jul 6 23:28:53.172699 kubelet[3522]: E0706 23:28:53.171064 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg" Jul 6 23:28:53.173413 kubelet[3522]: E0706 23:28:53.171491 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6777f4cb5-jqnmg_calico-apiserver(715b37dd-4c55-4c71-8494-cc2f493772ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6777f4cb5-jqnmg_calico-apiserver(715b37dd-4c55-4c71-8494-cc2f493772ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f58abdcd380f88593c93c8012e4e03e442c8220b343221c5db370dcc596b4276\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg" podUID="715b37dd-4c55-4c71-8494-cc2f493772ba" Jul 6 23:28:53.188922 containerd[2034]: time="2025-07-06T23:28:53.187511669Z" level=error msg="Failed to destroy network for sandbox \"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.191370 containerd[2034]: time="2025-07-06T23:28:53.191283641Z" level=error msg="Failed to destroy network for sandbox \"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.192018 containerd[2034]: time="2025-07-06T23:28:53.191669861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-fz7lq,Uid:681b5493-6ec2-48d8-b1bd-05c7e34a77d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.192890 kubelet[3522]: E0706 23:28:53.192333 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.192890 kubelet[3522]: E0706 23:28:53.192413 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq" Jul 6 
23:28:53.192890 kubelet[3522]: E0706 23:28:53.192450 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq" Jul 6 23:28:53.193781 kubelet[3522]: E0706 23:28:53.192523 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6777f4cb5-fz7lq_calico-apiserver(681b5493-6ec2-48d8-b1bd-05c7e34a77d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6777f4cb5-fz7lq_calico-apiserver(681b5493-6ec2-48d8-b1bd-05c7e34a77d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f1ea76b53370f1d381dd733635f428f490ae58dbaf57a83ccc35333dd27e5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq" podUID="681b5493-6ec2-48d8-b1bd-05c7e34a77d0" Jul 6 23:28:53.194464 containerd[2034]: time="2025-07-06T23:28:53.194306429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb89dfdd6-4n8l4,Uid:ddd89310-58db-47b0-a7b4-d9cde8e0d91b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.195295 kubelet[3522]: E0706 23:28:53.195191 3522 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.195295 kubelet[3522]: E0706 23:28:53.195263 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4" Jul 6 23:28:53.195577 kubelet[3522]: E0706 23:28:53.195304 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4" Jul 6 23:28:53.195577 kubelet[3522]: E0706 23:28:53.195368 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb89dfdd6-4n8l4_calico-system(ddd89310-58db-47b0-a7b4-d9cde8e0d91b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb89dfdd6-4n8l4_calico-system(ddd89310-58db-47b0-a7b4-d9cde8e0d91b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ace3b71730cc67c5188bb0ca0da1f39e794d49aa9acba9371e8099f41f914b97\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4" podUID="ddd89310-58db-47b0-a7b4-d9cde8e0d91b" Jul 6 23:28:53.582484 containerd[2034]: time="2025-07-06T23:28:53.582404311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d97f4f6-sr6f4,Uid:c466df60-ba1c-453f-9253-7f09b565b994,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:53.640422 systemd[1]: run-netns-cni\x2dec49bca3\x2d92ee\x2d2e15\x2d690c\x2d409f20340e9d.mount: Deactivated successfully. Jul 6 23:28:53.640605 systemd[1]: run-netns-cni\x2ded1450ab\x2ddc62\x2db8b9\x2d35e5\x2d08eae00256ea.mount: Deactivated successfully. Jul 6 23:28:53.686568 containerd[2034]: time="2025-07-06T23:28:53.686437280Z" level=error msg="Failed to destroy network for sandbox \"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.690446 containerd[2034]: time="2025-07-06T23:28:53.690376568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d97f4f6-sr6f4,Uid:c466df60-ba1c-453f-9253-7f09b565b994,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.691429 kubelet[3522]: E0706 23:28:53.691375 3522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:53.692547 kubelet[3522]: E0706 23:28:53.692505 3522 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-544d97f4f6-sr6f4" Jul 6 23:28:53.694065 kubelet[3522]: E0706 23:28:53.692701 3522 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-544d97f4f6-sr6f4" Jul 6 23:28:53.694065 kubelet[3522]: E0706 23:28:53.692804 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-544d97f4f6-sr6f4_calico-system(c466df60-ba1c-453f-9253-7f09b565b994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-544d97f4f6-sr6f4_calico-system(c466df60-ba1c-453f-9253-7f09b565b994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dc9f59b4961aab8a95fd2f061b33ae755534954621e760e26d82264c80993ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-544d97f4f6-sr6f4" podUID="c466df60-ba1c-453f-9253-7f09b565b994" Jul 6 23:28:53.693070 systemd[1]: 
run-netns-cni\x2deb3407ec\x2d61a4\x2de168\x2d9dea\x2d915176247657.mount: Deactivated successfully. Jul 6 23:28:59.188563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781388547.mount: Deactivated successfully. Jul 6 23:28:59.262463 containerd[2034]: time="2025-07-06T23:28:59.262034303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:59.264726 containerd[2034]: time="2025-07-06T23:28:59.264203819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:28:59.268189 containerd[2034]: time="2025-07-06T23:28:59.268120595Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:59.272044 containerd[2034]: time="2025-07-06T23:28:59.271910124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:59.273075 containerd[2034]: time="2025-07-06T23:28:59.272904756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.182671123s" Jul 6 23:28:59.273075 containerd[2034]: time="2025-07-06T23:28:59.272992356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:28:59.314480 containerd[2034]: time="2025-07-06T23:28:59.314391348Z" level=info msg="CreateContainer within sandbox 
\"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:28:59.338987 containerd[2034]: time="2025-07-06T23:28:59.335463732Z" level=info msg="Container 12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:59.348013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240080926.mount: Deactivated successfully. Jul 6 23:28:59.360964 containerd[2034]: time="2025-07-06T23:28:59.360873516Z" level=info msg="CreateContainer within sandbox \"048a518691e7355ab6af80303605afcb37677ad5a0ff4ba2fb74eedf29e85d08\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\"" Jul 6 23:28:59.361838 containerd[2034]: time="2025-07-06T23:28:59.361783344Z" level=info msg="StartContainer for \"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\"" Jul 6 23:28:59.365271 containerd[2034]: time="2025-07-06T23:28:59.365210988Z" level=info msg="connecting to shim 12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423" address="unix:///run/containerd/s/8638eeb44030b819ce0352c6b48a7cae0c0b70bb81ca7589dbfa54c30ac02a10" protocol=ttrpc version=3 Jul 6 23:28:59.442835 systemd[1]: Started cri-containerd-12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423.scope - libcontainer container 12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423. Jul 6 23:28:59.558246 containerd[2034]: time="2025-07-06T23:28:59.557767753Z" level=info msg="StartContainer for \"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" returns successfully" Jul 6 23:28:59.839797 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:28:59.840010 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 6 23:29:00.176712 kubelet[3522]: I0706 23:29:00.176266 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7qlnx" podStartSLOduration=2.417942669 podStartE2EDuration="18.176233536s" podCreationTimestamp="2025-07-06 23:28:42 +0000 UTC" firstStartedPulling="2025-07-06 23:28:43.516766245 +0000 UTC m=+32.026079920" lastFinishedPulling="2025-07-06 23:28:59.2750571 +0000 UTC m=+47.784370787" observedRunningTime="2025-07-06 23:29:00.175404696 +0000 UTC m=+48.684718383" watchObservedRunningTime="2025-07-06 23:29:00.176233536 +0000 UTC m=+48.685547259" Jul 6 23:29:00.192705 kubelet[3522]: I0706 23:29:00.191774 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair\") pod \"c466df60-ba1c-453f-9253-7f09b565b994\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " Jul 6 23:29:00.192705 kubelet[3522]: I0706 23:29:00.191846 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c466df60-ba1c-453f-9253-7f09b565b994-whisker-ca-bundle\") pod \"c466df60-ba1c-453f-9253-7f09b565b994\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " Jul 6 23:29:00.192705 kubelet[3522]: I0706 23:29:00.191886 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k47kp\" (UniqueName: \"kubernetes.io/projected/c466df60-ba1c-453f-9253-7f09b565b994-kube-api-access-k47kp\") pod \"c466df60-ba1c-453f-9253-7f09b565b994\" (UID: \"c466df60-ba1c-453f-9253-7f09b565b994\") " Jul 6 23:29:00.198364 kubelet[3522]: I0706 23:29:00.198293 3522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c466df60-ba1c-453f-9253-7f09b565b994-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c466df60-ba1c-453f-9253-7f09b565b994" 
(UID: "c466df60-ba1c-453f-9253-7f09b565b994"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:29:00.222149 systemd[1]: var-lib-kubelet-pods-c466df60\x2dba1c\x2d453f\x2d9253\x2d7f09b565b994-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:29:00.223589 kubelet[3522]: I0706 23:29:00.223222 3522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c466df60-ba1c-453f-9253-7f09b565b994" (UID: "c466df60-ba1c-453f-9253-7f09b565b994"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:29:00.223589 kubelet[3522]: I0706 23:29:00.223466 3522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c466df60-ba1c-453f-9253-7f09b565b994-kube-api-access-k47kp" (OuterVolumeSpecName: "kube-api-access-k47kp") pod "c466df60-ba1c-453f-9253-7f09b565b994" (UID: "c466df60-ba1c-453f-9253-7f09b565b994"). InnerVolumeSpecName "kube-api-access-k47kp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:29:00.237883 systemd[1]: var-lib-kubelet-pods-c466df60\x2dba1c\x2d453f\x2d9253\x2d7f09b565b994-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk47kp.mount: Deactivated successfully. 
Jul 6 23:29:00.298094 kubelet[3522]: I0706 23:29:00.297977 3522 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c466df60-ba1c-453f-9253-7f09b565b994-whisker-backend-key-pair\") on node \"ip-172-31-24-125\" DevicePath \"\""
Jul 6 23:29:00.298094 kubelet[3522]: I0706 23:29:00.298034 3522 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c466df60-ba1c-453f-9253-7f09b565b994-whisker-ca-bundle\") on node \"ip-172-31-24-125\" DevicePath \"\""
Jul 6 23:29:00.298094 kubelet[3522]: I0706 23:29:00.298058 3522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k47kp\" (UniqueName: \"kubernetes.io/projected/c466df60-ba1c-453f-9253-7f09b565b994-kube-api-access-k47kp\") on node \"ip-172-31-24-125\" DevicePath \"\""
Jul 6 23:29:00.425872 containerd[2034]: time="2025-07-06T23:29:00.425722273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" id:\"967a69e8c80a54a48849b136c2dc7608acddbe4d40e33adf05d3b7d51bda962d\" pid:4509 exit_status:1 exited_at:{seconds:1751844540 nanos:424843813}"
Jul 6 23:29:00.445422 systemd[1]: Removed slice kubepods-besteffort-podc466df60_ba1c_453f_9253_7f09b565b994.slice - libcontainer container kubepods-besteffort-podc466df60_ba1c_453f_9253_7f09b565b994.slice.
Jul 6 23:29:00.571401 systemd[1]: Created slice kubepods-besteffort-pod681634e7_eb51_4901_b83b_07026ff8db16.slice - libcontainer container kubepods-besteffort-pod681634e7_eb51_4901_b83b_07026ff8db16.slice.
Jul 6 23:29:00.599969 kubelet[3522]: I0706 23:29:00.599062 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccnb\" (UniqueName: \"kubernetes.io/projected/681634e7-eb51-4901-b83b-07026ff8db16-kube-api-access-bccnb\") pod \"whisker-5987859cb8-km2ch\" (UID: \"681634e7-eb51-4901-b83b-07026ff8db16\") " pod="calico-system/whisker-5987859cb8-km2ch"
Jul 6 23:29:00.599969 kubelet[3522]: I0706 23:29:00.599389 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/681634e7-eb51-4901-b83b-07026ff8db16-whisker-backend-key-pair\") pod \"whisker-5987859cb8-km2ch\" (UID: \"681634e7-eb51-4901-b83b-07026ff8db16\") " pod="calico-system/whisker-5987859cb8-km2ch"
Jul 6 23:29:00.599969 kubelet[3522]: I0706 23:29:00.599479 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681634e7-eb51-4901-b83b-07026ff8db16-whisker-ca-bundle\") pod \"whisker-5987859cb8-km2ch\" (UID: \"681634e7-eb51-4901-b83b-07026ff8db16\") " pod="calico-system/whisker-5987859cb8-km2ch"
Jul 6 23:29:00.880809 containerd[2034]: time="2025-07-06T23:29:00.880753155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5987859cb8-km2ch,Uid:681634e7-eb51-4901-b83b-07026ff8db16,Namespace:calico-system,Attempt:0,}"
Jul 6 23:29:01.217382 (udev-worker)[4485]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:29:01.222021 systemd-networkd[1820]: cali704e56fa810: Link UP
Jul 6 23:29:01.223378 systemd-networkd[1820]: cali704e56fa810: Gained carrier
Jul 6 23:29:01.259088 containerd[2034]: 2025-07-06 23:29:00.923 [INFO][4537] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 6 23:29:01.259088 containerd[2034]: 2025-07-06 23:29:01.021 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0 whisker-5987859cb8- calico-system 681634e7-eb51-4901-b83b-07026ff8db16 922 0 2025-07-06 23:29:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5987859cb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-125 whisker-5987859cb8-km2ch eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali704e56fa810 [] [] }} ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-"
Jul 6 23:29:01.259088 containerd[2034]: 2025-07-06 23:29:01.021 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.259088 containerd[2034]: 2025-07-06 23:29:01.105 [INFO][4549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" HandleID="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Workload="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.105 [INFO][4549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" HandleID="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Workload="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003297c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-125", "pod":"whisker-5987859cb8-km2ch", "timestamp":"2025-07-06 23:29:01.105373285 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.105 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.105 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.106 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125'
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.119 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" host="ip-172-31-24-125"
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.134 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125"
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.144 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.148 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:01.259685 containerd[2034]: 2025-07-06 23:29:01.153 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.153 [INFO][4549] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" host="ip-172-31-24-125"
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.158 [INFO][4549] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.170 [INFO][4549] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" host="ip-172-31-24-125"
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.185 [INFO][4549] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" host="ip-172-31-24-125"
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.185 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" host="ip-172-31-24-125"
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.185 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:29:01.260636 containerd[2034]: 2025-07-06 23:29:01.185 [INFO][4549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" HandleID="k8s-pod-network.32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Workload="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.261249 containerd[2034]: 2025-07-06 23:29:01.198 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0", GenerateName:"whisker-5987859cb8-", Namespace:"calico-system", SelfLink:"", UID:"681634e7-eb51-4901-b83b-07026ff8db16", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 29, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5987859cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"whisker-5987859cb8-km2ch", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali704e56fa810", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:29:01.261249 containerd[2034]: 2025-07-06 23:29:01.199 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.1/32] ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.261516 containerd[2034]: 2025-07-06 23:29:01.199 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali704e56fa810 ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.261516 containerd[2034]: 2025-07-06 23:29:01.225 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.261681 containerd[2034]: 2025-07-06 23:29:01.225 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0", GenerateName:"whisker-5987859cb8-", Namespace:"calico-system", SelfLink:"", UID:"681634e7-eb51-4901-b83b-07026ff8db16", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 29, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5987859cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105", Pod:"whisker-5987859cb8-km2ch", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali704e56fa810", MAC:"c2:25:78:02:71:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:29:01.261859 containerd[2034]: 2025-07-06 23:29:01.252 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" Namespace="calico-system" Pod="whisker-5987859cb8-km2ch" WorkloadEndpoint="ip--172--31--24--125-k8s-whisker--5987859cb8--km2ch-eth0"
Jul 6 23:29:01.344311 containerd[2034]: time="2025-07-06T23:29:01.344187266Z" level=info msg="connecting to shim 32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105" address="unix:///run/containerd/s/9da597a77b2ccac5ff6a21cd9ab01c8fc15a47b9fb8ac025dd171cc562d33d0d" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:29:01.389271 containerd[2034]: time="2025-07-06T23:29:01.389107718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" id:\"b6b2c6693d38e3ec0ecf59932a786cf9ee0784eca3dbe2cbf2cdc9594e11cccc\" pid:4568 exit_status:1 exited_at:{seconds:1751844541 nanos:387667190}"
Jul 6 23:29:01.412233 systemd[1]: Started cri-containerd-32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105.scope - libcontainer container 32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105.
Jul 6 23:29:01.485889 containerd[2034]: time="2025-07-06T23:29:01.485578226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5987859cb8-km2ch,Uid:681634e7-eb51-4901-b83b-07026ff8db16,Namespace:calico-system,Attempt:0,} returns sandbox id \"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105\""
Jul 6 23:29:01.489813 containerd[2034]: time="2025-07-06T23:29:01.489733563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 6 23:29:01.762982 kubelet[3522]: I0706 23:29:01.762867 3522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c466df60-ba1c-453f-9253-7f09b565b994" path="/var/lib/kubelet/pods/c466df60-ba1c-453f-9253-7f09b565b994/volumes"
Jul 6 23:29:02.879668 systemd-networkd[1820]: cali704e56fa810: Gained IPv6LL
Jul 6 23:29:03.089247 systemd-networkd[1820]: vxlan.calico: Link UP
Jul 6 23:29:03.089267 systemd-networkd[1820]: vxlan.calico: Gained carrier
Jul 6 23:29:03.133526 (udev-worker)[4484]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:29:03.682558 containerd[2034]: time="2025-07-06T23:29:03.682479365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:03.684696 containerd[2034]: time="2025-07-06T23:29:03.684495161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614"
Jul 6 23:29:03.686922 containerd[2034]: time="2025-07-06T23:29:03.686850569Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:03.692087 containerd[2034]: time="2025-07-06T23:29:03.692001521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:03.693579 containerd[2034]: time="2025-07-06T23:29:03.693348317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.203552594s"
Jul 6 23:29:03.693579 containerd[2034]: time="2025-07-06T23:29:03.693404741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 6 23:29:03.701721 containerd[2034]: time="2025-07-06T23:29:03.701391366Z" level=info msg="CreateContainer within sandbox \"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 6 23:29:03.719620 containerd[2034]: time="2025-07-06T23:29:03.719567334Z" level=info msg="Container b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:03.729755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611726964.mount: Deactivated successfully.
Jul 6 23:29:03.747634 containerd[2034]: time="2025-07-06T23:29:03.747485070Z" level=info msg="CreateContainer within sandbox \"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4\""
Jul 6 23:29:03.749283 containerd[2034]: time="2025-07-06T23:29:03.749194326Z" level=info msg="StartContainer for \"b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4\""
Jul 6 23:29:03.752343 containerd[2034]: time="2025-07-06T23:29:03.752276634Z" level=info msg="connecting to shim b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4" address="unix:///run/containerd/s/9da597a77b2ccac5ff6a21cd9ab01c8fc15a47b9fb8ac025dd171cc562d33d0d" protocol=ttrpc version=3
Jul 6 23:29:03.798242 systemd[1]: Started cri-containerd-b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4.scope - libcontainer container b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4.
Jul 6 23:29:03.899191 containerd[2034]: time="2025-07-06T23:29:03.899108010Z" level=info msg="StartContainer for \"b18bad7fd306b8df060c7a2450ce9e869785b26829ab145e9b644dd4cb67b2c4\" returns successfully"
Jul 6 23:29:03.903719 containerd[2034]: time="2025-07-06T23:29:03.903586375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 6 23:29:04.223681 systemd-networkd[1820]: vxlan.calico: Gained IPv6LL
Jul 6 23:29:04.758369 containerd[2034]: time="2025-07-06T23:29:04.758270755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7qxkc,Uid:6acec3dd-3f28-47f0-aa8c-d062fd8a3781,Namespace:calico-system,Attempt:0,}"
Jul 6 23:29:04.761465 containerd[2034]: time="2025-07-06T23:29:04.761256823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7c4tq,Uid:28268df8-9281-4f79-a130-45c4535d7f25,Namespace:kube-system,Attempt:0,}"
Jul 6 23:29:05.036482 systemd[1]: Started sshd@9-172.31.24.125:22-139.178.89.65:34826.service - OpenSSH per-connection server daemon (139.178.89.65:34826).
Jul 6 23:29:05.201993 systemd-networkd[1820]: cali580864c4c61: Link UP
Jul 6 23:29:05.204260 systemd-networkd[1820]: cali580864c4c61: Gained carrier
Jul 6 23:29:05.247178 containerd[2034]: 2025-07-06 23:29:04.988 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0 csi-node-driver- calico-system 6acec3dd-3f28-47f0-aa8c-d062fd8a3781 723 0 2025-07-06 23:28:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-125 csi-node-driver-7qxkc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali580864c4c61 [] [] }} ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-"
Jul 6 23:29:05.247178 containerd[2034]: 2025-07-06 23:29:04.989 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.247178 containerd[2034]: 2025-07-06 23:29:05.097 [INFO][4893] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" HandleID="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Workload="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.097 [INFO][4893] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" HandleID="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Workload="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cb80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-125", "pod":"csi-node-driver-7qxkc", "timestamp":"2025-07-06 23:29:05.096983716 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.097 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.097 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.097 [INFO][4893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125'
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.122 [INFO][4893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" host="ip-172-31-24-125"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.139 [INFO][4893] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.147 [INFO][4893] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.151 [INFO][4893] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.156 [INFO][4893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125"
Jul 6 23:29:05.247482 containerd[2034]: 2025-07-06 23:29:05.157 [INFO][4893] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" host="ip-172-31-24-125"
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.160 [INFO][4893] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.167 [INFO][4893] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" host="ip-172-31-24-125"
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.181 [INFO][4893] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" host="ip-172-31-24-125"
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.181 [INFO][4893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" host="ip-172-31-24-125"
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.183 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:29:05.249150 containerd[2034]: 2025-07-06 23:29:05.183 [INFO][4893] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" HandleID="k8s-pod-network.31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Workload="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.249444 containerd[2034]: 2025-07-06 23:29:05.189 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6acec3dd-3f28-47f0-aa8c-d062fd8a3781", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"csi-node-driver-7qxkc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580864c4c61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:29:05.249574 containerd[2034]: 2025-07-06 23:29:05.190 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.2/32] ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.249574 containerd[2034]: 2025-07-06 23:29:05.190 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580864c4c61 ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.249574 containerd[2034]: 2025-07-06 23:29:05.207 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.249725 containerd[2034]: 2025-07-06 23:29:05.210 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6acec3dd-3f28-47f0-aa8c-d062fd8a3781", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948", Pod:"csi-node-driver-7qxkc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580864c4c61", MAC:"26:b3:9a:da:6d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:29:05.249836 containerd[2034]: 2025-07-06 23:29:05.238 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" Namespace="calico-system" Pod="csi-node-driver-7qxkc" WorkloadEndpoint="ip--172--31--24--125-k8s-csi--node--driver--7qxkc-eth0"
Jul 6 23:29:05.311168 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 34826 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:05.322016 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:05.344680 containerd[2034]: time="2025-07-06T23:29:05.340780734Z" level=info msg="connecting to shim 31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948" address="unix:///run/containerd/s/3ef68d375dc5ce32111ea6ec71012464b06fe7635993192b343485863115d11f" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:29:05.348077 systemd-logind[2000]: New session 10 of user core.
Jul 6 23:29:05.354728 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:29:05.393018 systemd-networkd[1820]: cali7cd57f301f6: Link UP
Jul 6 23:29:05.397186 systemd-networkd[1820]: cali7cd57f301f6: Gained carrier
Jul 6 23:29:05.434326 systemd[1]: Started cri-containerd-31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948.scope - libcontainer container 31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948.
Jul 6 23:29:05.449969 containerd[2034]: 2025-07-06 23:29:04.999 [INFO][4872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0 coredns-668d6bf9bc- kube-system 28268df8-9281-4f79-a130-45c4535d7f25 846 0 2025-07-06 23:28:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-125 coredns-668d6bf9bc-7c4tq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cd57f301f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-"
Jul 6 23:29:05.449969 containerd[2034]: 2025-07-06 23:29:05.000 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0"
Jul 6 23:29:05.449969 containerd[2034]: 2025-07-06 23:29:05.137 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" HandleID="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0"
Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.137 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" HandleID="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122150), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-125", "pod":"coredns-668d6bf9bc-7c4tq", "timestamp":"2025-07-06 23:29:05.137184617 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.137 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.183 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.183 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.224 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" host="ip-172-31-24-125" Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.250 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.268 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.273 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:05.450365 containerd[2034]: 2025-07-06 23:29:05.281 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.283 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" host="ip-172-31-24-125" Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.285 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26 Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.300 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" host="ip-172-31-24-125" Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.345 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 
handle="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" host="ip-172-31-24-125" Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.346 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" host="ip-172-31-24-125" Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.346 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:05.450851 containerd[2034]: 2025-07-06 23:29:05.346 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" HandleID="k8s-pod-network.c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.366 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28268df8-9281-4f79-a130-45c4535d7f25", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"coredns-668d6bf9bc-7c4tq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cd57f301f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.367 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.3/32] ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.367 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cd57f301f6 ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.401 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.407 [INFO][4872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28268df8-9281-4f79-a130-45c4535d7f25", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26", Pod:"coredns-668d6bf9bc-7c4tq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cd57f301f6", MAC:"4a:a0:b1:33:6d:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:05.451257 containerd[2034]: 2025-07-06 23:29:05.436 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" Namespace="kube-system" Pod="coredns-668d6bf9bc-7c4tq" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--7c4tq-eth0" Jul 6 23:29:05.535387 containerd[2034]: time="2025-07-06T23:29:05.535227859Z" level=info msg="connecting to shim c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26" address="unix:///run/containerd/s/3b3c9652bd2ca40366d339e566963b61cf6f85173242d22bc490fa1b8257a49d" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:05.624377 systemd[1]: Started cri-containerd-c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26.scope - libcontainer container c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26. 
Jul 6 23:29:05.765463 containerd[2034]: time="2025-07-06T23:29:05.765414224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-jqnmg,Uid:715b37dd-4c55-4c71-8494-cc2f493772ba,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:29:05.811823 containerd[2034]: time="2025-07-06T23:29:05.811112768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7qxkc,Uid:6acec3dd-3f28-47f0-aa8c-d062fd8a3781,Namespace:calico-system,Attempt:0,} returns sandbox id \"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948\"" Jul 6 23:29:05.923934 containerd[2034]: time="2025-07-06T23:29:05.923683437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7c4tq,Uid:28268df8-9281-4f79-a130-45c4535d7f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26\"" Jul 6 23:29:05.943685 containerd[2034]: time="2025-07-06T23:29:05.943457517Z" level=info msg="CreateContainer within sandbox \"c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:05.971339 sshd[4936]: Connection closed by 139.178.89.65 port 34826 Jul 6 23:29:05.973676 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:05.987520 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:29:05.990523 systemd[1]: sshd@9-172.31.24.125:22-139.178.89.65:34826.service: Deactivated successfully. Jul 6 23:29:06.011059 systemd-logind[2000]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:29:06.032980 containerd[2034]: time="2025-07-06T23:29:06.031196453Z" level=info msg="Container 7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:06.041002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183691157.mount: Deactivated successfully. 
Jul 6 23:29:06.050673 systemd-logind[2000]: Removed session 10. Jul 6 23:29:06.081149 containerd[2034]: time="2025-07-06T23:29:06.081065177Z" level=info msg="CreateContainer within sandbox \"c496469e159a373f8cc99c9d83eb6fc345cc53ad0a0b2a49db6c360fb7eaff26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae\"" Jul 6 23:29:06.085279 containerd[2034]: time="2025-07-06T23:29:06.082909421Z" level=info msg="StartContainer for \"7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae\"" Jul 6 23:29:06.096386 containerd[2034]: time="2025-07-06T23:29:06.096314729Z" level=info msg="connecting to shim 7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae" address="unix:///run/containerd/s/3b3c9652bd2ca40366d339e566963b61cf6f85173242d22bc490fa1b8257a49d" protocol=ttrpc version=3 Jul 6 23:29:06.247606 systemd[1]: Started cri-containerd-7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae.scope - libcontainer container 7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae. 
Jul 6 23:29:06.442144 containerd[2034]: time="2025-07-06T23:29:06.442067863Z" level=info msg="StartContainer for \"7684c591d287911a221de5e5a5572759fc12c78765dadaa386ed32f76eb3a9ae\" returns successfully" Jul 6 23:29:06.551281 systemd-networkd[1820]: calia306c17be05: Link UP Jul 6 23:29:06.556908 systemd-networkd[1820]: calia306c17be05: Gained carrier Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.118 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0 calico-apiserver-6777f4cb5- calico-apiserver 715b37dd-4c55-4c71-8494-cc2f493772ba 858 0 2025-07-06 23:28:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6777f4cb5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-125 calico-apiserver-6777f4cb5-jqnmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia306c17be05 [] [] }} ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.120 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.326 [INFO][5054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" 
HandleID="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.326 [INFO][5054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" HandleID="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315ca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-125", "pod":"calico-apiserver-6777f4cb5-jqnmg", "timestamp":"2025-07-06 23:29:06.325971931 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.326 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.326 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.326 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.370 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.383 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.404 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.414 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.435 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.437 [INFO][5054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.446 [INFO][5054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.466 [INFO][5054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.519 [INFO][5054] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 
handle="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.519 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" host="ip-172-31-24-125" Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.519 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:06.654385 containerd[2034]: 2025-07-06 23:29:06.519 [INFO][5054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" HandleID="k8s-pod-network.5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.534 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0", GenerateName:"calico-apiserver-6777f4cb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"715b37dd-4c55-4c71-8494-cc2f493772ba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777f4cb5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"calico-apiserver-6777f4cb5-jqnmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia306c17be05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.534 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.4/32] ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.534 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia306c17be05 ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.560 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.564 [INFO][5021] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0", GenerateName:"calico-apiserver-6777f4cb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"715b37dd-4c55-4c71-8494-cc2f493772ba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777f4cb5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b", Pod:"calico-apiserver-6777f4cb5-jqnmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia306c17be05", MAC:"62:41:3c:43:8a:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:06.657670 containerd[2034]: 2025-07-06 23:29:06.640 [INFO][5021] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-jqnmg" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--jqnmg-eth0" Jul 6 23:29:06.655736 systemd-networkd[1820]: cali7cd57f301f6: Gained IPv6LL Jul 6 23:29:06.719604 systemd-networkd[1820]: cali580864c4c61: Gained IPv6LL Jul 6 23:29:06.765106 containerd[2034]: time="2025-07-06T23:29:06.765020229Z" level=info msg="connecting to shim 5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b" address="unix:///run/containerd/s/6ce91a6b4b18683dba221d7b90ee36dd1e7c151dcfe6f432406e0b4189469eba" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:06.910411 systemd[1]: Started cri-containerd-5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b.scope - libcontainer container 5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b. Jul 6 23:29:07.238812 kubelet[3522]: I0706 23:29:07.238719 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7c4tq" podStartSLOduration=50.238692055 podStartE2EDuration="50.238692055s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:07.234430267 +0000 UTC m=+55.743744074" watchObservedRunningTime="2025-07-06 23:29:07.238692055 +0000 UTC m=+55.748005730" Jul 6 23:29:07.450971 containerd[2034]: time="2025-07-06T23:29:07.450873428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-jqnmg,Uid:715b37dd-4c55-4c71-8494-cc2f493772ba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b\"" Jul 6 23:29:07.761080 containerd[2034]: time="2025-07-06T23:29:07.759149062Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-g4898,Uid:e0a04f64-8a0b-40ef-9fb0-940a3feff5bc,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:07.761080 containerd[2034]: time="2025-07-06T23:29:07.759524578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kpgkp,Uid:fa2c4d51-a0cf-4405-85e5-c4308819e470,Namespace:calico-system,Attempt:0,}" Jul 6 23:29:07.768399 containerd[2034]: time="2025-07-06T23:29:07.768301546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb89dfdd6-4n8l4,Uid:ddd89310-58db-47b0-a7b4-d9cde8e0d91b,Namespace:calico-system,Attempt:0,}" Jul 6 23:29:08.252237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329080138.mount: Deactivated successfully. Jul 6 23:29:08.258163 systemd-networkd[1820]: calia306c17be05: Gained IPv6LL Jul 6 23:29:08.313310 containerd[2034]: time="2025-07-06T23:29:08.313242464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:08.317864 containerd[2034]: time="2025-07-06T23:29:08.317797820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:29:08.321052 containerd[2034]: time="2025-07-06T23:29:08.320973284Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:08.342036 containerd[2034]: time="2025-07-06T23:29:08.341111817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:08.345336 containerd[2034]: time="2025-07-06T23:29:08.345251457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.441532494s" Jul 6 23:29:08.345336 containerd[2034]: time="2025-07-06T23:29:08.345336333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:29:08.350670 containerd[2034]: time="2025-07-06T23:29:08.350386269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:29:08.356242 containerd[2034]: time="2025-07-06T23:29:08.356165133Z" level=info msg="CreateContainer within sandbox \"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:29:08.377242 containerd[2034]: time="2025-07-06T23:29:08.377162109Z" level=info msg="Container 7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:08.413538 systemd-networkd[1820]: calibbf6b95a3b7: Link UP Jul 6 23:29:08.418189 systemd-networkd[1820]: calibbf6b95a3b7: Gained carrier Jul 6 23:29:08.454825 containerd[2034]: time="2025-07-06T23:29:08.454278717Z" level=info msg="CreateContainer within sandbox \"32b584c5b41ab793a84d6c176094aa1cb8baa35a8ded9f7c4c33944d0e564105\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a\"" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.032 [INFO][5143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0 coredns-668d6bf9bc- kube-system e0a04f64-8a0b-40ef-9fb0-940a3feff5bc 857 0 2025-07-06 23:28:17 
+0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-125 coredns-668d6bf9bc-g4898 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbf6b95a3b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.033 [INFO][5143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.217 [INFO][5184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" HandleID="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.218 [INFO][5184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" HandleID="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400014df40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-125", "pod":"coredns-668d6bf9bc-g4898", "timestamp":"2025-07-06 23:29:08.21752234 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.222 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.222 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.222 [INFO][5184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.272 [INFO][5184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.300 [INFO][5184] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.316 [INFO][5184] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.327 [INFO][5184] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.337 [INFO][5184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.339 [INFO][5184] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.348 [INFO][5184] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41 Jul 6 
23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.367 [INFO][5184] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.396 [INFO][5184] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.5/26] block=192.168.35.0/26 handle="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.396 [INFO][5184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.5/26] handle="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" host="ip-172-31-24-125" Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.397 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:08.459145 containerd[2034]: 2025-07-06 23:29:08.398 [INFO][5184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.5/26] IPv6=[] ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" HandleID="k8s-pod-network.4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Workload="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.405 [INFO][5143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0a04f64-8a0b-40ef-9fb0-940a3feff5bc", 
ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"coredns-668d6bf9bc-g4898", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbf6b95a3b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.405 [INFO][5143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.5/32] ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.405 [INFO][5143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbf6b95a3b7 
ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.415 [INFO][5143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.416 [INFO][5143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0a04f64-8a0b-40ef-9fb0-940a3feff5bc", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41", Pod:"coredns-668d6bf9bc-g4898", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbf6b95a3b7", MAC:"52:8c:44:41:44:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.462867 containerd[2034]: 2025-07-06 23:29:08.449 [INFO][5143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4898" WorkloadEndpoint="ip--172--31--24--125-k8s-coredns--668d6bf9bc--g4898-eth0" Jul 6 23:29:08.466384 containerd[2034]: time="2025-07-06T23:29:08.465676749Z" level=info msg="StartContainer for \"7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a\"" Jul 6 23:29:08.472037 containerd[2034]: time="2025-07-06T23:29:08.471871725Z" level=info msg="connecting to shim 7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a" address="unix:///run/containerd/s/9da597a77b2ccac5ff6a21cd9ab01c8fc15a47b9fb8ac025dd171cc562d33d0d" protocol=ttrpc version=3 Jul 6 23:29:08.535561 systemd[1]: Started cri-containerd-7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a.scope - libcontainer container 7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a. 
Jul 6 23:29:08.580924 containerd[2034]: time="2025-07-06T23:29:08.580587202Z" level=info msg="connecting to shim 4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41" address="unix:///run/containerd/s/2d3d65ae74c4c41f36f3238e1c00f074221b96d162e14c4c29ea57819136f60a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:08.594350 systemd-networkd[1820]: calib1494a30910: Link UP Jul 6 23:29:08.605007 systemd-networkd[1820]: calib1494a30910: Gained carrier Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.071 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0 calico-kube-controllers-5cb89dfdd6- calico-system ddd89310-58db-47b0-a7b4-d9cde8e0d91b 856 0 2025-07-06 23:28:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cb89dfdd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-125 calico-kube-controllers-5cb89dfdd6-4n8l4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib1494a30910 [] [] }} ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.072 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.233 [INFO][5192] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" HandleID="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Workload="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.234 [INFO][5192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" HandleID="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Workload="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003237d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-125", "pod":"calico-kube-controllers-5cb89dfdd6-4n8l4", "timestamp":"2025-07-06 23:29:08.233200508 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.234 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.396 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.396 [INFO][5192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.433 [INFO][5192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.453 [INFO][5192] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.476 [INFO][5192] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.493 [INFO][5192] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.502 [INFO][5192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.502 [INFO][5192] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.507 [INFO][5192] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.525 [INFO][5192] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.555 [INFO][5192] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.6/26] block=192.168.35.0/26 
handle="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.556 [INFO][5192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.6/26] handle="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" host="ip-172-31-24-125" Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.556 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:08.670272 containerd[2034]: 2025-07-06 23:29:08.556 [INFO][5192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.6/26] IPv6=[] ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" HandleID="k8s-pod-network.2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Workload="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.567 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0", GenerateName:"calico-kube-controllers-5cb89dfdd6-", Namespace:"calico-system", SelfLink:"", UID:"ddd89310-58db-47b0-a7b4-d9cde8e0d91b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb89dfdd6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"calico-kube-controllers-5cb89dfdd6-4n8l4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1494a30910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.568 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.6/32] ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.568 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1494a30910 ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.615 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" 
WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.617 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0", GenerateName:"calico-kube-controllers-5cb89dfdd6-", Namespace:"calico-system", SelfLink:"", UID:"ddd89310-58db-47b0-a7b4-d9cde8e0d91b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb89dfdd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da", Pod:"calico-kube-controllers-5cb89dfdd6-4n8l4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1494a30910", MAC:"aa:d8:db:c8:a7:19", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.671786 containerd[2034]: 2025-07-06 23:29:08.649 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" Namespace="calico-system" Pod="calico-kube-controllers-5cb89dfdd6-4n8l4" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--kube--controllers--5cb89dfdd6--4n8l4-eth0" Jul 6 23:29:08.697449 systemd[1]: Started cri-containerd-4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41.scope - libcontainer container 4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41. Jul 6 23:29:08.761616 containerd[2034]: time="2025-07-06T23:29:08.761216279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-fz7lq,Uid:681b5493-6ec2-48d8-b1bd-05c7e34a77d0,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:29:08.824587 systemd-networkd[1820]: califec8ea072d8: Link UP Jul 6 23:29:08.837348 systemd-networkd[1820]: califec8ea072d8: Gained carrier Jul 6 23:29:08.881446 containerd[2034]: time="2025-07-06T23:29:08.881201279Z" level=info msg="connecting to shim 2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da" address="unix:///run/containerd/s/15ff2782a741861f228edc26c807a4e7cbc339617dbcfbc0bf926f45cb3f030f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.066 [INFO][5146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0 goldmane-768f4c5c69- calico-system fa2c4d51-a0cf-4405-85e5-c4308819e470 854 0 2025-07-06 23:28:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] 
[] []} {k8s ip-172-31-24-125 goldmane-768f4c5c69-kpgkp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califec8ea072d8 [] [] }} ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.067 [INFO][5146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.271 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" HandleID="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Workload="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.272 [INFO][5190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" HandleID="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Workload="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038d170), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-125", "pod":"goldmane-768f4c5c69-kpgkp", "timestamp":"2025-07-06 23:29:08.271802048 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.272 
[INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.556 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.556 [INFO][5190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.619 [INFO][5190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.637 [INFO][5190] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.657 [INFO][5190] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.671 [INFO][5190] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.703 [INFO][5190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.703 [INFO][5190] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.711 [INFO][5190] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.721 [INFO][5190] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" 
host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.748 [INFO][5190] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.7/26] block=192.168.35.0/26 handle="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.751 [INFO][5190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.7/26] handle="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" host="ip-172-31-24-125" Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.753 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:08.924057 containerd[2034]: 2025-07-06 23:29:08.753 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.7/26] IPv6=[] ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" HandleID="k8s-pod-network.83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Workload="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.771 [INFO][5146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fa2c4d51-a0cf-4405-85e5-c4308819e470", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", 
"k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"goldmane-768f4c5c69-kpgkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califec8ea072d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.772 [INFO][5146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.7/32] ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.773 [INFO][5146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califec8ea072d8 ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.860 [INFO][5146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.868 
[INFO][5146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fa2c4d51-a0cf-4405-85e5-c4308819e470", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a", Pod:"goldmane-768f4c5c69-kpgkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califec8ea072d8", MAC:"72:de:6c:fd:e9:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:08.927503 containerd[2034]: 2025-07-06 23:29:08.901 [INFO][5146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" Namespace="calico-system" Pod="goldmane-768f4c5c69-kpgkp" WorkloadEndpoint="ip--172--31--24--125-k8s-goldmane--768f4c5c69--kpgkp-eth0" Jul 6 23:29:09.091259 systemd[1]: Started cri-containerd-2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da.scope - libcontainer container 2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da. Jul 6 23:29:09.113390 containerd[2034]: time="2025-07-06T23:29:09.112756688Z" level=info msg="StartContainer for \"7d17714064fc2a3ea9069726c7405ab2d57ccc31243332faf2b3041133e2711a\" returns successfully" Jul 6 23:29:09.149741 containerd[2034]: time="2025-07-06T23:29:09.149582145Z" level=info msg="connecting to shim 83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a" address="unix:///run/containerd/s/93909992b5f20d6cb77e0883c5cfcaa6fa9f5a2d2bfcf0668f5d363600ac12f5" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:09.157912 containerd[2034]: time="2025-07-06T23:29:09.157707105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4898,Uid:e0a04f64-8a0b-40ef-9fb0-940a3feff5bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41\"" Jul 6 23:29:09.174505 containerd[2034]: time="2025-07-06T23:29:09.174329961Z" level=info msg="CreateContainer within sandbox \"4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:09.244965 containerd[2034]: time="2025-07-06T23:29:09.244503489Z" level=info msg="Container 66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:09.306157 containerd[2034]: time="2025-07-06T23:29:09.305367621Z" level=info msg="CreateContainer within sandbox \"4ba8be91c4ddaaf4a4f3fc59a193629d344cb0672c84cf632b241073beca5e41\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7\"" Jul 6 23:29:09.308258 containerd[2034]: time="2025-07-06T23:29:09.307900569Z" level=info msg="StartContainer for \"66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7\"" Jul 6 23:29:09.316784 kubelet[3522]: I0706 23:29:09.315898 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5987859cb8-km2ch" podStartSLOduration=2.456241935 podStartE2EDuration="9.315872301s" podCreationTimestamp="2025-07-06 23:29:00 +0000 UTC" firstStartedPulling="2025-07-06 23:29:01.489272175 +0000 UTC m=+49.998585874" lastFinishedPulling="2025-07-06 23:29:08.348902541 +0000 UTC m=+56.858216240" observedRunningTime="2025-07-06 23:29:09.314262657 +0000 UTC m=+57.823576356" watchObservedRunningTime="2025-07-06 23:29:09.315872301 +0000 UTC m=+57.825185988" Jul 6 23:29:09.319376 containerd[2034]: time="2025-07-06T23:29:09.319065525Z" level=info msg="connecting to shim 66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7" address="unix:///run/containerd/s/2d3d65ae74c4c41f36f3238e1c00f074221b96d162e14c4c29ea57819136f60a" protocol=ttrpc version=3 Jul 6 23:29:09.378097 systemd[1]: Started cri-containerd-83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a.scope - libcontainer container 83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a. Jul 6 23:29:09.410099 systemd[1]: Started cri-containerd-66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7.scope - libcontainer container 66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7. 
Jul 6 23:29:09.522444 containerd[2034]: time="2025-07-06T23:29:09.522374902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb89dfdd6-4n8l4,Uid:ddd89310-58db-47b0-a7b4-d9cde8e0d91b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da\"" Jul 6 23:29:09.583361 systemd-networkd[1820]: cali2d28b79a323: Link UP Jul 6 23:29:09.589380 systemd-networkd[1820]: cali2d28b79a323: Gained carrier Jul 6 23:29:09.605466 containerd[2034]: time="2025-07-06T23:29:09.604720199Z" level=info msg="StartContainer for \"66a6fba8637d3e22ff2d26a380bc98261f1e4f488f9845a5221c65e93fb7f1f7\" returns successfully" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.160 [INFO][5290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0 calico-apiserver-6777f4cb5- calico-apiserver 681b5493-6ec2-48d8-b1bd-05c7e34a77d0 850 0 2025-07-06 23:28:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6777f4cb5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-125 calico-apiserver-6777f4cb5-fz7lq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d28b79a323 [] [] }} ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.160 [INFO][5290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" 
Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.352 [INFO][5387] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" HandleID="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.358 [INFO][5387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" HandleID="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d76b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-125", "pod":"calico-apiserver-6777f4cb5-fz7lq", "timestamp":"2025-07-06 23:29:09.352932742 +0000 UTC"}, Hostname:"ip-172-31-24-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.360 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.360 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.360 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-125' Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.417 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.431 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.478 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.489 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.507 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.509 [INFO][5387] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.513 [INFO][5387] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791 Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.531 [INFO][5387] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.557 [INFO][5387] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.8/26] block=192.168.35.0/26 
handle="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.557 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.8/26] handle="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" host="ip-172-31-24-125" Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.558 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:29:09.669306 containerd[2034]: 2025-07-06 23:29:09.558 [INFO][5387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.8/26] IPv6=[] ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" HandleID="k8s-pod-network.1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Workload="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.568 [INFO][5290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0", GenerateName:"calico-apiserver-6777f4cb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"681b5493-6ec2-48d8-b1bd-05c7e34a77d0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777f4cb5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"", Pod:"calico-apiserver-6777f4cb5-fz7lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d28b79a323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.568 [INFO][5290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.8/32] ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.568 [INFO][5290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d28b79a323 ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.593 [INFO][5290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.597 [INFO][5290] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0", GenerateName:"calico-apiserver-6777f4cb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"681b5493-6ec2-48d8-b1bd-05c7e34a77d0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777f4cb5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-125", ContainerID:"1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791", Pod:"calico-apiserver-6777f4cb5-fz7lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d28b79a323", MAC:"b2:4d:11:35:a7:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:09.696573 containerd[2034]: 2025-07-06 23:29:09.651 [INFO][5290] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" Namespace="calico-apiserver" Pod="calico-apiserver-6777f4cb5-fz7lq" WorkloadEndpoint="ip--172--31--24--125-k8s-calico--apiserver--6777f4cb5--fz7lq-eth0" Jul 6 23:29:09.841280 containerd[2034]: time="2025-07-06T23:29:09.841203660Z" level=info msg="connecting to shim 1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791" address="unix:///run/containerd/s/d713a09afaf3da167d90cb67122c59fc75b3dd74df96e466dd5a7ff0295a5e30" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:09.990597 systemd[1]: Started cri-containerd-1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791.scope - libcontainer container 1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791. Jul 6 23:29:10.021388 containerd[2034]: time="2025-07-06T23:29:10.021163677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kpgkp,Uid:fa2c4d51-a0cf-4405-85e5-c4308819e470,Namespace:calico-system,Attempt:0,} returns sandbox id \"83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a\"" Jul 6 23:29:10.128124 containerd[2034]: time="2025-07-06T23:29:10.128049909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777f4cb5-fz7lq,Uid:681b5493-6ec2-48d8-b1bd-05c7e34a77d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791\"" Jul 6 23:29:10.210808 containerd[2034]: time="2025-07-06T23:29:10.210755554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:10.214255 containerd[2034]: time="2025-07-06T23:29:10.214196998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:29:10.216735 containerd[2034]: time="2025-07-06T23:29:10.216674842Z" level=info msg="ImageCreate event 
name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:10.222649 containerd[2034]: time="2025-07-06T23:29:10.222581218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:10.223884 containerd[2034]: time="2025-07-06T23:29:10.223824634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.873364769s" Jul 6 23:29:10.224021 containerd[2034]: time="2025-07-06T23:29:10.223881286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:29:10.226278 containerd[2034]: time="2025-07-06T23:29:10.226229794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:29:10.230025 containerd[2034]: time="2025-07-06T23:29:10.229918174Z" level=info msg="CreateContainer within sandbox \"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:29:10.240216 systemd-networkd[1820]: calib1494a30910: Gained IPv6LL Jul 6 23:29:10.262512 containerd[2034]: time="2025-07-06T23:29:10.262355314Z" level=info msg="Container 60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:10.295962 containerd[2034]: time="2025-07-06T23:29:10.295855810Z" level=info msg="CreateContainer within sandbox \"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2\"" Jul 6 23:29:10.299470 containerd[2034]: time="2025-07-06T23:29:10.299368270Z" level=info msg="StartContainer for \"60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2\"" Jul 6 23:29:10.308974 containerd[2034]: time="2025-07-06T23:29:10.308849194Z" level=info msg="connecting to shim 60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2" address="unix:///run/containerd/s/3ef68d375dc5ce32111ea6ec71012464b06fe7635993192b343485863115d11f" protocol=ttrpc version=3 Jul 6 23:29:10.326179 kubelet[3522]: I0706 23:29:10.322428 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g4898" podStartSLOduration=53.322410454 podStartE2EDuration="53.322410454s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:10.322348186 +0000 UTC m=+58.831661885" watchObservedRunningTime="2025-07-06 23:29:10.322410454 +0000 UTC m=+58.831724141" Jul 6 23:29:10.367254 systemd-networkd[1820]: calibbf6b95a3b7: Gained IPv6LL Jul 6 23:29:10.377766 systemd[1]: Started cri-containerd-60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2.scope - libcontainer container 60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2. Jul 6 23:29:10.558714 containerd[2034]: time="2025-07-06T23:29:10.558333408Z" level=info msg="StartContainer for \"60d7b85a5f10f73f2f12096b467706c7c74fdad0d550b9ddcf6e89a3754853c2\" returns successfully" Jul 6 23:29:10.751538 systemd-networkd[1820]: cali2d28b79a323: Gained IPv6LL Jul 6 23:29:10.815416 systemd-networkd[1820]: califec8ea072d8: Gained IPv6LL Jul 6 23:29:11.010148 systemd[1]: Started sshd@10-172.31.24.125:22-139.178.89.65:41504.service - OpenSSH per-connection server daemon (139.178.89.65:41504). 
Jul 6 23:29:11.220548 sshd[5560]: Accepted publickey for core from 139.178.89.65 port 41504 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:11.223888 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:11.234573 systemd-logind[2000]: New session 11 of user core. Jul 6 23:29:11.249261 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:29:11.665158 sshd[5562]: Connection closed by 139.178.89.65 port 41504 Jul 6 23:29:11.667293 sshd-session[5560]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:11.680328 systemd[1]: sshd@10-172.31.24.125:22-139.178.89.65:41504.service: Deactivated successfully. Jul 6 23:29:11.688626 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:29:11.700709 systemd-logind[2000]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:29:11.709165 systemd-logind[2000]: Removed session 11. Jul 6 23:29:12.942585 containerd[2034]: time="2025-07-06T23:29:12.942504411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:12.944602 containerd[2034]: time="2025-07-06T23:29:12.944522451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:29:12.947439 containerd[2034]: time="2025-07-06T23:29:12.947345043Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:12.951970 containerd[2034]: time="2025-07-06T23:29:12.951876927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:12.954224 containerd[2034]: time="2025-07-06T23:29:12.954084159Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.726376085s" Jul 6 23:29:12.954224 containerd[2034]: time="2025-07-06T23:29:12.954160263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:29:12.956422 containerd[2034]: time="2025-07-06T23:29:12.956271639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:29:12.961870 containerd[2034]: time="2025-07-06T23:29:12.960041751Z" level=info msg="CreateContainer within sandbox \"5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:29:12.978447 containerd[2034]: time="2025-07-06T23:29:12.978378472Z" level=info msg="Container d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:13.000664 containerd[2034]: time="2025-07-06T23:29:13.000520356Z" level=info msg="CreateContainer within sandbox \"5dbd18df6b3bd85004bfc0b672632d333ad0a745207d6f0b90dcdd1a4ab9f52b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393\"" Jul 6 23:29:13.002979 containerd[2034]: time="2025-07-06T23:29:13.002815032Z" level=info msg="StartContainer for \"d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393\"" Jul 6 23:29:13.005400 containerd[2034]: time="2025-07-06T23:29:13.005284716Z" level=info msg="connecting to shim d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393" 
address="unix:///run/containerd/s/6ce91a6b4b18683dba221d7b90ee36dd1e7c151dcfe6f432406e0b4189469eba" protocol=ttrpc version=3 Jul 6 23:29:13.059315 systemd[1]: Started cri-containerd-d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393.scope - libcontainer container d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393. Jul 6 23:29:13.174528 containerd[2034]: time="2025-07-06T23:29:13.174480133Z" level=info msg="StartContainer for \"d86d71f213a5d3683a9032de4ec25307b7fecf0b77d4c02aaa268888c0667393\" returns successfully" Jul 6 23:29:13.273169 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.35.0:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.35.0:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 8 cali704e56fa810 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 9 vxlan.calico [fe80::64c0:fff:fe29:f69%5]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 10 cali580864c4c61 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 11 cali7cd57f301f6 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 12 calia306c17be05 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 13 calibbf6b95a3b7 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 14 calib1494a30910 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 15 califec8ea072d8 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 6 23:29:13.276816 ntpd[1992]: 6 Jul 23:29:13 ntpd[1992]: Listen normally on 16 cali2d28b79a323 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 6 23:29:13.273295 ntpd[1992]: Listen normally on 8 
cali704e56fa810 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 6 23:29:13.273374 ntpd[1992]: Listen normally on 9 vxlan.calico [fe80::64c0:fff:fe29:f69%5]:123 Jul 6 23:29:13.273452 ntpd[1992]: Listen normally on 10 cali580864c4c61 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 6 23:29:13.273520 ntpd[1992]: Listen normally on 11 cali7cd57f301f6 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 6 23:29:13.273584 ntpd[1992]: Listen normally on 12 calia306c17be05 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 6 23:29:13.273657 ntpd[1992]: Listen normally on 13 calibbf6b95a3b7 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 6 23:29:13.273722 ntpd[1992]: Listen normally on 14 calib1494a30910 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 6 23:29:13.273786 ntpd[1992]: Listen normally on 15 califec8ea072d8 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 6 23:29:13.273849 ntpd[1992]: Listen normally on 16 cali2d28b79a323 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 6 23:29:13.357828 kubelet[3522]: I0706 23:29:13.357685 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6777f4cb5-jqnmg" podStartSLOduration=37.86168041 podStartE2EDuration="43.357659557s" podCreationTimestamp="2025-07-06 23:28:30 +0000 UTC" firstStartedPulling="2025-07-06 23:29:07.459826796 +0000 UTC m=+55.969140471" lastFinishedPulling="2025-07-06 23:29:12.955805859 +0000 UTC m=+61.465119618" observedRunningTime="2025-07-06 23:29:13.356076253 +0000 UTC m=+61.865390012" watchObservedRunningTime="2025-07-06 23:29:13.357659557 +0000 UTC m=+61.866973244" Jul 6 23:29:14.338439 kubelet[3522]: I0706 23:29:14.337906 3522 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:16.704387 systemd[1]: Started sshd@11-172.31.24.125:22-139.178.89.65:41518.service - OpenSSH per-connection server daemon (139.178.89.65:41518). 
Jul 6 23:29:16.923473 sshd[5633]: Accepted publickey for core from 139.178.89.65 port 41518 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:16.928135 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:16.937381 systemd-logind[2000]: New session 12 of user core. Jul 6 23:29:16.947255 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:29:17.234742 sshd[5635]: Connection closed by 139.178.89.65 port 41518 Jul 6 23:29:17.235371 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:17.248480 systemd[1]: sshd@11-172.31.24.125:22-139.178.89.65:41518.service: Deactivated successfully. Jul 6 23:29:17.257553 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:29:17.261361 systemd-logind[2000]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:29:17.283911 systemd[1]: Started sshd@12-172.31.24.125:22-139.178.89.65:41528.service - OpenSSH per-connection server daemon (139.178.89.65:41528). Jul 6 23:29:17.286349 systemd-logind[2000]: Removed session 12. Jul 6 23:29:17.503557 sshd[5652]: Accepted publickey for core from 139.178.89.65 port 41528 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:17.507277 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:17.519088 systemd-logind[2000]: New session 13 of user core. Jul 6 23:29:17.525535 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:29:17.980310 sshd[5654]: Connection closed by 139.178.89.65 port 41528 Jul 6 23:29:17.982025 sshd-session[5652]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:17.997390 systemd[1]: sshd@12-172.31.24.125:22-139.178.89.65:41528.service: Deactivated successfully. Jul 6 23:29:18.007912 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:29:18.016304 systemd-logind[2000]: Session 13 logged out. 
Waiting for processes to exit. Jul 6 23:29:18.042275 systemd[1]: Started sshd@13-172.31.24.125:22-139.178.89.65:41534.service - OpenSSH per-connection server daemon (139.178.89.65:41534). Jul 6 23:29:18.051057 systemd-logind[2000]: Removed session 13. Jul 6 23:29:18.345881 sshd[5664]: Accepted publickey for core from 139.178.89.65 port 41534 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:18.352630 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:18.377755 systemd-logind[2000]: New session 14 of user core. Jul 6 23:29:18.384375 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:29:18.716774 sshd[5666]: Connection closed by 139.178.89.65 port 41534 Jul 6 23:29:18.719912 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:18.729588 systemd[1]: sshd@13-172.31.24.125:22-139.178.89.65:41534.service: Deactivated successfully. Jul 6 23:29:18.737654 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:29:18.741699 systemd-logind[2000]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:29:18.746548 systemd-logind[2000]: Removed session 14. 
Jul 6 23:29:18.996279 containerd[2034]: time="2025-07-06T23:29:18.996131733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:18.998208 containerd[2034]: time="2025-07-06T23:29:18.998142537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 6 23:29:18.999042 containerd[2034]: time="2025-07-06T23:29:18.998920425Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:19.003670 containerd[2034]: time="2025-07-06T23:29:19.003590886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:19.005801 containerd[2034]: time="2025-07-06T23:29:19.005730378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 6.049121023s"
Jul 6 23:29:19.005801 containerd[2034]: time="2025-07-06T23:29:19.005793654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 6 23:29:19.009970 containerd[2034]: time="2025-07-06T23:29:19.009286962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 6 23:29:19.037220 containerd[2034]: time="2025-07-06T23:29:19.037149750Z" level=info msg="CreateContainer within sandbox \"2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 6 23:29:19.051917 containerd[2034]: time="2025-07-06T23:29:19.051744642Z" level=info msg="Container b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:19.065479 containerd[2034]: time="2025-07-06T23:29:19.065405262Z" level=info msg="CreateContainer within sandbox \"2ac7233fafa3928df495d1a8d05f4d31d3cf9dc2f3233f9c3a11dc093c4c75da\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\""
Jul 6 23:29:19.066409 containerd[2034]: time="2025-07-06T23:29:19.066346626Z" level=info msg="StartContainer for \"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\""
Jul 6 23:29:19.070417 containerd[2034]: time="2025-07-06T23:29:19.070245978Z" level=info msg="connecting to shim b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143" address="unix:///run/containerd/s/15ff2782a741861f228edc26c807a4e7cbc339617dbcfbc0bf926f45cb3f030f" protocol=ttrpc version=3
Jul 6 23:29:19.114268 systemd[1]: Started cri-containerd-b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143.scope - libcontainer container b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143.
Jul 6 23:29:19.205192 containerd[2034]: time="2025-07-06T23:29:19.205130827Z" level=info msg="StartContainer for \"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" returns successfully"
Jul 6 23:29:19.506036 containerd[2034]: time="2025-07-06T23:29:19.505972532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" id:\"9421f1a52566de3495a51f82a3e52a7319905c704a4a131aaf0f6db56dff3c46\" pid:5737 exited_at:{seconds:1751844559 nanos:505399052}"
Jul 6 23:29:19.535149 kubelet[3522]: I0706 23:29:19.534787 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cb89dfdd6-4n8l4" podStartSLOduration=28.052797668 podStartE2EDuration="37.534719744s" podCreationTimestamp="2025-07-06 23:28:42 +0000 UTC" firstStartedPulling="2025-07-06 23:29:09.525590734 +0000 UTC m=+58.034904433" lastFinishedPulling="2025-07-06 23:29:19.007512738 +0000 UTC m=+67.516826509" observedRunningTime="2025-07-06 23:29:19.429957188 +0000 UTC m=+67.939270887" watchObservedRunningTime="2025-07-06 23:29:19.534719744 +0000 UTC m=+68.044033443"
Jul 6 23:29:21.116562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042111137.mount: Deactivated successfully.
Jul 6 23:29:21.865866 containerd[2034]: time="2025-07-06T23:29:21.865784712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:21.868489 containerd[2034]: time="2025-07-06T23:29:21.868348716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 6 23:29:21.872231 containerd[2034]: time="2025-07-06T23:29:21.872151768Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:21.882737 containerd[2034]: time="2025-07-06T23:29:21.882607164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:21.884420 containerd[2034]: time="2025-07-06T23:29:21.884366424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.875015298s"
Jul 6 23:29:21.884710 containerd[2034]: time="2025-07-06T23:29:21.884574612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 6 23:29:21.888769 containerd[2034]: time="2025-07-06T23:29:21.887818812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:29:21.894889 containerd[2034]: time="2025-07-06T23:29:21.893273784Z" level=info msg="CreateContainer within sandbox \"83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 6 23:29:21.928996 containerd[2034]: time="2025-07-06T23:29:21.928579464Z" level=info msg="Container c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:21.980970 containerd[2034]: time="2025-07-06T23:29:21.980861652Z" level=info msg="CreateContainer within sandbox \"83556ed8e53106a2e7b0ce5cf5bd0e58b6a79362150a43e0ca8187b640df692a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\""
Jul 6 23:29:21.981916 containerd[2034]: time="2025-07-06T23:29:21.981820116Z" level=info msg="StartContainer for \"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\""
Jul 6 23:29:21.985665 containerd[2034]: time="2025-07-06T23:29:21.985586736Z" level=info msg="connecting to shim c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d" address="unix:///run/containerd/s/93909992b5f20d6cb77e0883c5cfcaa6fa9f5a2d2bfcf0668f5d363600ac12f5" protocol=ttrpc version=3
Jul 6 23:29:22.033232 systemd[1]: Started cri-containerd-c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d.scope - libcontainer container c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d.
Jul 6 23:29:22.167534 containerd[2034]: time="2025-07-06T23:29:22.167367861Z" level=info msg="StartContainer for \"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" returns successfully"
Jul 6 23:29:22.253545 containerd[2034]: time="2025-07-06T23:29:22.253338262Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:22.255671 containerd[2034]: time="2025-07-06T23:29:22.255603250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 6 23:29:22.262701 containerd[2034]: time="2025-07-06T23:29:22.262625806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 373.597346ms"
Jul 6 23:29:22.262701 containerd[2034]: time="2025-07-06T23:29:22.262695730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 6 23:29:22.266315 containerd[2034]: time="2025-07-06T23:29:22.265600870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 6 23:29:22.267465 containerd[2034]: time="2025-07-06T23:29:22.267390886Z" level=info msg="CreateContainer within sandbox \"1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:29:22.290967 containerd[2034]: time="2025-07-06T23:29:22.289352710Z" level=info msg="Container 7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:22.315734 containerd[2034]: time="2025-07-06T23:29:22.315653002Z" level=info msg="CreateContainer within sandbox \"1744a42f76ae083d9c90393bc68185be8db1a3cf3a91ab71cfe09b3cc0122791\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88\""
Jul 6 23:29:22.320206 containerd[2034]: time="2025-07-06T23:29:22.320120350Z" level=info msg="StartContainer for \"7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88\""
Jul 6 23:29:22.328107 containerd[2034]: time="2025-07-06T23:29:22.327857542Z" level=info msg="connecting to shim 7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88" address="unix:///run/containerd/s/d713a09afaf3da167d90cb67122c59fc75b3dd74df96e466dd5a7ff0295a5e30" protocol=ttrpc version=3
Jul 6 23:29:22.369260 systemd[1]: Started cri-containerd-7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88.scope - libcontainer container 7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88.
Jul 6 23:29:22.453875 kubelet[3522]: I0706 23:29:22.453657 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-kpgkp" podStartSLOduration=29.594130308 podStartE2EDuration="41.453631343s" podCreationTimestamp="2025-07-06 23:28:41 +0000 UTC" firstStartedPulling="2025-07-06 23:29:10.027726873 +0000 UTC m=+58.537040560" lastFinishedPulling="2025-07-06 23:29:21.887227836 +0000 UTC m=+70.396541595" observedRunningTime="2025-07-06 23:29:22.445435907 +0000 UTC m=+70.954749702" watchObservedRunningTime="2025-07-06 23:29:22.453631343 +0000 UTC m=+70.962945030"
Jul 6 23:29:22.502988 containerd[2034]: time="2025-07-06T23:29:22.502555091Z" level=info msg="StartContainer for \"7124307146f36399780e6246d2c823b21be40a9e7ebb8ebd920d4cc8908f6b88\" returns successfully"
Jul 6 23:29:23.466334 kubelet[3522]: I0706 23:29:23.466227 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6777f4cb5-fz7lq" podStartSLOduration=41.332922167 podStartE2EDuration="53.465792288s" podCreationTimestamp="2025-07-06 23:28:30 +0000 UTC" firstStartedPulling="2025-07-06 23:29:10.131833473 +0000 UTC m=+58.641147160" lastFinishedPulling="2025-07-06 23:29:22.264703546 +0000 UTC m=+70.774017281" observedRunningTime="2025-07-06 23:29:23.464498628 +0000 UTC m=+71.973812459" watchObservedRunningTime="2025-07-06 23:29:23.465792288 +0000 UTC m=+71.975105975"
Jul 6 23:29:23.765895 systemd[1]: Started sshd@14-172.31.24.125:22-139.178.89.65:40796.service - OpenSSH per-connection server daemon (139.178.89.65:40796).
Jul 6 23:29:24.020337 containerd[2034]: time="2025-07-06T23:29:24.019575274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" id:\"c0f70df7ad4681763d03a5cd80d13c6f2b19836b8544cb7691b39f9043e4a8df\" pid:5854 exit_status:1 exited_at:{seconds:1751844564 nanos:18001582}"
Jul 6 23:29:24.086194 sshd[5873]: Accepted publickey for core from 139.178.89.65 port 40796 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:24.094582 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:24.114459 systemd-logind[2000]: New session 15 of user core.
Jul 6 23:29:24.121324 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:29:24.341716 containerd[2034]: time="2025-07-06T23:29:24.340610316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:24.341716 containerd[2034]: time="2025-07-06T23:29:24.341184444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 6 23:29:24.351600 containerd[2034]: time="2025-07-06T23:29:24.351518232Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:24.353178 containerd[2034]: time="2025-07-06T23:29:24.352898940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.087221942s"
Jul 6 23:29:24.353178 containerd[2034]: time="2025-07-06T23:29:24.352988568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 6 23:29:24.354290 containerd[2034]: time="2025-07-06T23:29:24.354158640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:24.368963 containerd[2034]: time="2025-07-06T23:29:24.368842284Z" level=info msg="CreateContainer within sandbox \"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 6 23:29:24.405976 containerd[2034]: time="2025-07-06T23:29:24.402862812Z" level=info msg="Container 20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:24.446211 containerd[2034]: time="2025-07-06T23:29:24.446115733Z" level=info msg="CreateContainer within sandbox \"31ccb4a87461eef1369e039b5d8ed6fef21216d448ecd4ee96c2801c90d7d948\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329\""
Jul 6 23:29:24.449847 containerd[2034]: time="2025-07-06T23:29:24.449406925Z" level=info msg="StartContainer for \"20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329\""
Jul 6 23:29:24.461658 containerd[2034]: time="2025-07-06T23:29:24.461425609Z" level=info msg="connecting to shim 20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329" address="unix:///run/containerd/s/3ef68d375dc5ce32111ea6ec71012464b06fe7635993192b343485863115d11f" protocol=ttrpc version=3
Jul 6 23:29:24.465839 kubelet[3522]: I0706 23:29:24.465247 3522 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:29:24.545602 systemd[1]: Started cri-containerd-20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329.scope - libcontainer container 20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329.
Jul 6 23:29:24.563739 sshd[5877]: Connection closed by 139.178.89.65 port 40796
Jul 6 23:29:24.564627 sshd-session[5873]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:24.580375 systemd[1]: sshd@14-172.31.24.125:22-139.178.89.65:40796.service: Deactivated successfully.
Jul 6 23:29:24.585974 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:29:24.590422 systemd-logind[2000]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:29:24.597134 systemd-logind[2000]: Removed session 15.
Jul 6 23:29:24.754208 containerd[2034]: time="2025-07-06T23:29:24.754144190Z" level=info msg="StartContainer for \"20eb92f0a74b26c5609fde8e416d7e7a06492cb11a72fa23eb7453c3fa2bf329\" returns successfully"
Jul 6 23:29:24.973197 kubelet[3522]: I0706 23:29:24.973046 3522 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 6 23:29:24.975131 kubelet[3522]: I0706 23:29:24.973473 3522 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 6 23:29:24.986354 containerd[2034]: time="2025-07-06T23:29:24.986247135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" id:\"07b605b825533b101b45f02d0b9ef8a8e46152afc368d4c8fcaee99878ae01e1\" pid:5912 exit_status:1 exited_at:{seconds:1751844564 nanos:985242135}"
Jul 6 23:29:25.506158 kubelet[3522]: I0706 23:29:25.505985 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7qxkc" podStartSLOduration=24.966066358 podStartE2EDuration="43.50563103s" podCreationTimestamp="2025-07-06 23:28:42 +0000 UTC" firstStartedPulling="2025-07-06 23:29:05.81862316 +0000 UTC m=+54.327936847" lastFinishedPulling="2025-07-06 23:29:24.358187844 +0000 UTC m=+72.867501519" observedRunningTime="2025-07-06 23:29:25.504966278 +0000 UTC m=+74.014280001" watchObservedRunningTime="2025-07-06 23:29:25.50563103 +0000 UTC m=+74.014944705"
Jul 6 23:29:29.602657 systemd[1]: Started sshd@15-172.31.24.125:22-139.178.89.65:42650.service - OpenSSH per-connection server daemon (139.178.89.65:42650).
Jul 6 23:29:29.819265 sshd[5945]: Accepted publickey for core from 139.178.89.65 port 42650 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:29.823219 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:29.833228 systemd-logind[2000]: New session 16 of user core.
Jul 6 23:29:29.841325 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:29:30.116750 sshd[5947]: Connection closed by 139.178.89.65 port 42650
Jul 6 23:29:30.117852 sshd-session[5945]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:30.124250 systemd[1]: sshd@15-172.31.24.125:22-139.178.89.65:42650.service: Deactivated successfully.
Jul 6 23:29:30.128733 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:29:30.131066 systemd-logind[2000]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:29:30.134279 systemd-logind[2000]: Removed session 16.
Jul 6 23:29:31.279844 containerd[2034]: time="2025-07-06T23:29:31.279572166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" id:\"3edef2e6ea5c91bb5f86fdecff1c64a1a4734d8125def89c31b15236b5899e0a\" pid:5972 exited_at:{seconds:1751844571 nanos:279205254}"
Jul 6 23:29:35.156863 systemd[1]: Started sshd@16-172.31.24.125:22-139.178.89.65:42666.service - OpenSSH per-connection server daemon (139.178.89.65:42666).
Jul 6 23:29:35.365760 sshd[5985]: Accepted publickey for core from 139.178.89.65 port 42666 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:35.368727 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:35.377059 systemd-logind[2000]: New session 17 of user core.
Jul 6 23:29:35.386256 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:29:35.642213 sshd[5988]: Connection closed by 139.178.89.65 port 42666
Jul 6 23:29:35.643101 sshd-session[5985]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:35.651118 systemd[1]: sshd@16-172.31.24.125:22-139.178.89.65:42666.service: Deactivated successfully.
Jul 6 23:29:35.655853 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:29:35.658379 systemd-logind[2000]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:29:35.663635 systemd-logind[2000]: Removed session 17.
Jul 6 23:29:36.017430 kubelet[3522]: I0706 23:29:36.017094 3522 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:29:40.685011 systemd[1]: Started sshd@17-172.31.24.125:22-139.178.89.65:56034.service - OpenSSH per-connection server daemon (139.178.89.65:56034).
Jul 6 23:29:40.925301 sshd[6004]: Accepted publickey for core from 139.178.89.65 port 56034 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:40.928067 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:40.940882 systemd-logind[2000]: New session 18 of user core.
Jul 6 23:29:40.945735 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:29:41.248191 sshd[6006]: Connection closed by 139.178.89.65 port 56034
Jul 6 23:29:41.247648 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:41.257327 systemd[1]: sshd@17-172.31.24.125:22-139.178.89.65:56034.service: Deactivated successfully.
Jul 6 23:29:41.264227 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:29:41.270409 systemd-logind[2000]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:29:41.288442 systemd[1]: Started sshd@18-172.31.24.125:22-139.178.89.65:56044.service - OpenSSH per-connection server daemon (139.178.89.65:56044).
Jul 6 23:29:41.293929 systemd-logind[2000]: Removed session 18.
Jul 6 23:29:41.488354 sshd[6018]: Accepted publickey for core from 139.178.89.65 port 56044 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:41.492121 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:41.506110 systemd-logind[2000]: New session 19 of user core.
Jul 6 23:29:41.542302 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:29:42.261066 sshd[6020]: Connection closed by 139.178.89.65 port 56044
Jul 6 23:29:42.264519 sshd-session[6018]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:42.272477 systemd[1]: sshd@18-172.31.24.125:22-139.178.89.65:56044.service: Deactivated successfully.
Jul 6 23:29:42.284545 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:29:42.308252 systemd-logind[2000]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:29:42.314114 systemd[1]: Started sshd@19-172.31.24.125:22-139.178.89.65:56048.service - OpenSSH per-connection server daemon (139.178.89.65:56048).
Jul 6 23:29:42.317705 systemd-logind[2000]: Removed session 19.
Jul 6 23:29:42.529567 sshd[6030]: Accepted publickey for core from 139.178.89.65 port 56048 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:42.534356 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:42.554777 systemd-logind[2000]: New session 20 of user core.
Jul 6 23:29:42.561558 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:29:44.390062 sshd[6032]: Connection closed by 139.178.89.65 port 56048
Jul 6 23:29:44.391010 sshd-session[6030]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:44.401527 systemd[1]: sshd@19-172.31.24.125:22-139.178.89.65:56048.service: Deactivated successfully.
Jul 6 23:29:44.413177 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:29:44.419824 systemd-logind[2000]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:29:44.449410 systemd[1]: Started sshd@20-172.31.24.125:22-139.178.89.65:56056.service - OpenSSH per-connection server daemon (139.178.89.65:56056).
Jul 6 23:29:44.453887 systemd-logind[2000]: Removed session 20.
Jul 6 23:29:44.687086 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 56056 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:44.690878 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:44.706242 systemd-logind[2000]: New session 21 of user core.
Jul 6 23:29:44.714545 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:29:45.335762 sshd[6055]: Connection closed by 139.178.89.65 port 56056
Jul 6 23:29:45.336628 sshd-session[6052]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:45.346851 systemd[1]: sshd@20-172.31.24.125:22-139.178.89.65:56056.service: Deactivated successfully.
Jul 6 23:29:45.355932 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:29:45.360280 systemd-logind[2000]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:29:45.380643 systemd[1]: Started sshd@21-172.31.24.125:22-139.178.89.65:56072.service - OpenSSH per-connection server daemon (139.178.89.65:56072).
Jul 6 23:29:45.387021 systemd-logind[2000]: Removed session 21.
Jul 6 23:29:45.591260 sshd[6067]: Accepted publickey for core from 139.178.89.65 port 56072 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:45.593219 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:45.606550 systemd-logind[2000]: New session 22 of user core.
Jul 6 23:29:45.614574 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:29:45.936543 sshd[6069]: Connection closed by 139.178.89.65 port 56072
Jul 6 23:29:45.938604 sshd-session[6067]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:45.946636 systemd-logind[2000]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:29:45.948661 systemd[1]: sshd@21-172.31.24.125:22-139.178.89.65:56072.service: Deactivated successfully.
Jul 6 23:29:45.953496 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:29:45.958308 systemd-logind[2000]: Removed session 22.
Jul 6 23:29:49.462919 containerd[2034]: time="2025-07-06T23:29:49.462853765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" id:\"809578c099131f8d856a531b5e38ac0cfbded7b16f864945fa0c685e68535c8a\" pid:6096 exited_at:{seconds:1751844589 nanos:462170449}"
Jul 6 23:29:50.975392 systemd[1]: Started sshd@22-172.31.24.125:22-139.178.89.65:47810.service - OpenSSH per-connection server daemon (139.178.89.65:47810).
Jul 6 23:29:51.187022 sshd[6109]: Accepted publickey for core from 139.178.89.65 port 47810 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:51.190115 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:51.203145 systemd-logind[2000]: New session 23 of user core.
Jul 6 23:29:51.209285 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:29:51.528199 sshd[6111]: Connection closed by 139.178.89.65 port 47810
Jul 6 23:29:51.527354 sshd-session[6109]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:51.534562 systemd[1]: sshd@22-172.31.24.125:22-139.178.89.65:47810.service: Deactivated successfully.
Jul 6 23:29:51.543233 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:29:51.549652 systemd-logind[2000]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:29:51.554612 systemd-logind[2000]: Removed session 23.
Jul 6 23:29:54.786291 containerd[2034]: time="2025-07-06T23:29:54.786221179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" id:\"0c371d2a01d044db4f62d3762bd298f033decc826d53b8755e430e9df5132930\" pid:6138 exited_at:{seconds:1751844594 nanos:785231503}"
Jul 6 23:29:56.302969 containerd[2034]: time="2025-07-06T23:29:56.302659279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" id:\"00b3de23eb29b4246ad9b83f60ba9a474bafa82133434b95ab8e754f11d94584\" pid:6164 exited_at:{seconds:1751844596 nanos:302124967}"
Jul 6 23:29:56.569072 systemd[1]: Started sshd@23-172.31.24.125:22-139.178.89.65:47826.service - OpenSSH per-connection server daemon (139.178.89.65:47826).
Jul 6 23:29:56.786181 sshd[6175]: Accepted publickey for core from 139.178.89.65 port 47826 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:56.789371 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:56.799253 systemd-logind[2000]: New session 24 of user core.
Jul 6 23:29:56.807544 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:29:57.110975 sshd[6177]: Connection closed by 139.178.89.65 port 47826
Jul 6 23:29:57.111755 sshd-session[6175]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:57.119348 systemd[1]: sshd@23-172.31.24.125:22-139.178.89.65:47826.service: Deactivated successfully.
Jul 6 23:29:57.129179 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:29:57.135762 systemd-logind[2000]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:29:57.141284 systemd-logind[2000]: Removed session 24.
Jul 6 23:29:57.400101 containerd[2034]: time="2025-07-06T23:29:57.399396752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" id:\"5231a24a152d95cb957ccf37a40dcae2d958ab47c8ac04f9537de8829081d1a2\" pid:6201 exited_at:{seconds:1751844597 nanos:398337164}"
Jul 6 23:30:01.329621 containerd[2034]: time="2025-07-06T23:30:01.329543328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" id:\"322661001403090b2d276cf2ca2a1056567081130347f59fd7d4e8717f220612\" pid:6223 exited_at:{seconds:1751844601 nanos:328351872}"
Jul 6 23:30:02.152433 systemd[1]: Started sshd@24-172.31.24.125:22-139.178.89.65:46596.service - OpenSSH per-connection server daemon (139.178.89.65:46596).
Jul 6 23:30:02.375988 sshd[6236]: Accepted publickey for core from 139.178.89.65 port 46596 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:30:02.381355 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:30:02.395377 systemd-logind[2000]: New session 25 of user core.
Jul 6 23:30:02.401448 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:30:02.719277 sshd[6238]: Connection closed by 139.178.89.65 port 46596
Jul 6 23:30:02.721237 sshd-session[6236]: pam_unix(sshd:session): session closed for user core
Jul 6 23:30:02.729380 systemd-logind[2000]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:30:02.731768 systemd[1]: sshd@24-172.31.24.125:22-139.178.89.65:46596.service: Deactivated successfully.
Jul 6 23:30:02.741102 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:30:02.744826 systemd-logind[2000]: Removed session 25.
Jul 6 23:30:05.338282 update_engine[2003]: I20250706 23:30:05.338208 2003 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 6 23:30:05.339230 update_engine[2003]: I20250706 23:30:05.338972 2003 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 6 23:30:05.340452 update_engine[2003]: I20250706 23:30:05.340394 2003 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 6 23:30:05.342142 update_engine[2003]: I20250706 23:30:05.341487 2003 omaha_request_params.cc:62] Current group set to alpha
Jul 6 23:30:05.342555 update_engine[2003]: I20250706 23:30:05.342506 2003 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342644 2003 update_attempter.cc:643] Scheduling an action processor start.
Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342694 2003 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342762 2003 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342888 2003 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342908 2003 omaha_request_action.cc:272] Request: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: Jul 6 23:30:05.343000 update_engine[2003]: I20250706 23:30:05.342924 2003 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:30:05.344389 locksmithd[2043]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 6 23:30:05.358634 update_engine[2003]: I20250706 23:30:05.356344 2003 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:30:05.358634 update_engine[2003]: I20250706 23:30:05.357757 2003 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:30:05.392729 update_engine[2003]: E20250706 23:30:05.392661 2003 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:30:05.394217 update_engine[2003]: I20250706 23:30:05.394148 2003 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 6 23:30:07.758397 systemd[1]: Started sshd@25-172.31.24.125:22-139.178.89.65:46602.service - OpenSSH per-connection server daemon (139.178.89.65:46602). 
Jul 6 23:30:07.963048 sshd[6251]: Accepted publickey for core from 139.178.89.65 port 46602 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:07.965894 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:07.974877 systemd-logind[2000]: New session 26 of user core. Jul 6 23:30:07.986569 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:30:08.260072 sshd[6253]: Connection closed by 139.178.89.65 port 46602 Jul 6 23:30:08.262235 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:08.270455 systemd[1]: sshd@25-172.31.24.125:22-139.178.89.65:46602.service: Deactivated successfully. Jul 6 23:30:08.276850 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:30:08.279856 systemd-logind[2000]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:30:08.286234 systemd-logind[2000]: Removed session 26. Jul 6 23:30:13.305118 systemd[1]: Started sshd@26-172.31.24.125:22-139.178.89.65:54196.service - OpenSSH per-connection server daemon (139.178.89.65:54196). Jul 6 23:30:13.527319 sshd[6268]: Accepted publickey for core from 139.178.89.65 port 54196 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:13.531207 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:13.545137 systemd-logind[2000]: New session 27 of user core. Jul 6 23:30:13.554243 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:30:13.848547 sshd[6270]: Connection closed by 139.178.89.65 port 54196 Jul 6 23:30:13.849099 sshd-session[6268]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:13.860316 systemd[1]: sshd@26-172.31.24.125:22-139.178.89.65:54196.service: Deactivated successfully. Jul 6 23:30:13.865560 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:30:13.869505 systemd-logind[2000]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:30:13.873536 systemd-logind[2000]: Removed session 27. Jul 6 23:30:15.337073 update_engine[2003]: I20250706 23:30:15.336987 2003 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:30:15.338105 update_engine[2003]: I20250706 23:30:15.337373 2003 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:30:15.338105 update_engine[2003]: I20250706 23:30:15.337742 2003 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:30:15.347049 update_engine[2003]: E20250706 23:30:15.346972 2003 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:30:15.347201 update_engine[2003]: I20250706 23:30:15.347091 2003 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 6 23:30:18.893373 systemd[1]: Started sshd@27-172.31.24.125:22-139.178.89.65:54210.service - OpenSSH per-connection server daemon (139.178.89.65:54210). Jul 6 23:30:19.104739 sshd[6285]: Accepted publickey for core from 139.178.89.65 port 54210 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:19.108181 sshd-session[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:19.118641 systemd-logind[2000]: New session 28 of user core. Jul 6 23:30:19.124242 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:30:19.404307 sshd[6287]: Connection closed by 139.178.89.65 port 54210 Jul 6 23:30:19.404799 sshd-session[6285]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:19.418335 systemd[1]: sshd@27-172.31.24.125:22-139.178.89.65:54210.service: Deactivated successfully. Jul 6 23:30:19.426059 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:30:19.431192 systemd-logind[2000]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:30:19.438037 systemd-logind[2000]: Removed session 28. 
Jul 6 23:30:19.475732 containerd[2034]: time="2025-07-06T23:30:19.475624506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" id:\"85aab50e15d1d442089968e4aafabe693ce1c96396706991c7be94ed44664b23\" pid:6307 exited_at:{seconds:1751844619 nanos:474235254}" Jul 6 23:30:24.591684 containerd[2034]: time="2025-07-06T23:30:24.591616895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c62008803abe83ccfdcde5419e39e9b9cc559126d1fdf60eb62c5f1d2fb1be2d\" id:\"39c4b5c05b06bc4797fcfccb7a186e16dee4c8d793fd3c3068999116ee5aecf7\" pid:6338 exited_at:{seconds:1751844624 nanos:590901335}" Jul 6 23:30:25.341960 update_engine[2003]: I20250706 23:30:25.341863 2003 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:30:25.342459 update_engine[2003]: I20250706 23:30:25.342236 2003 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:30:25.342659 update_engine[2003]: I20250706 23:30:25.342585 2003 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:30:25.343675 update_engine[2003]: E20250706 23:30:25.343612 2003 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:30:25.343752 update_engine[2003]: I20250706 23:30:25.343698 2003 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 6 23:30:31.267404 containerd[2034]: time="2025-07-06T23:30:31.267339844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12bb4e5a2182d80b0ed0e51d226ee2740e009426df04eb1fb19344aef1596423\" id:\"89401ce7f8078dc913b91d90e0316adf8d396cd03e33c60c7f2ffb05e32b8358\" pid:6363 exited_at:{seconds:1751844631 nanos:266894716}" Jul 6 23:30:33.914163 systemd[1]: cri-containerd-44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc.scope: Deactivated successfully. 
Jul 6 23:30:33.915618 systemd[1]: cri-containerd-44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc.scope: Consumed 28.575s CPU time, 116M memory peak, 416K read from disk. Jul 6 23:30:33.920696 containerd[2034]: time="2025-07-06T23:30:33.920633158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" id:\"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" pid:3841 exit_status:1 exited_at:{seconds:1751844633 nanos:919718074}" Jul 6 23:30:33.947270 containerd[2034]: time="2025-07-06T23:30:33.947174326Z" level=info msg="received exit event container_id:\"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" id:\"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" pid:3841 exit_status:1 exited_at:{seconds:1751844633 nanos:919718074}" Jul 6 23:30:33.998640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc-rootfs.mount: Deactivated successfully. Jul 6 23:30:34.303925 kubelet[3522]: E0706 23:30:34.303851 3522 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 6 23:30:34.726169 systemd[1]: cri-containerd-4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3.scope: Deactivated successfully. Jul 6 23:30:34.726744 systemd[1]: cri-containerd-4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3.scope: Consumed 4.929s CPU time, 58.8M memory peak, 380K read from disk. 
Jul 6 23:30:34.741165 containerd[2034]: time="2025-07-06T23:30:34.740833666Z" level=info msg="received exit event container_id:\"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\" id:\"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\" pid:3147 exit_status:1 exited_at:{seconds:1751844634 nanos:739992874}" Jul 6 23:30:34.743240 containerd[2034]: time="2025-07-06T23:30:34.743140306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\" id:\"4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3\" pid:3147 exit_status:1 exited_at:{seconds:1751844634 nanos:739992874}" Jul 6 23:30:34.764600 kubelet[3522]: I0706 23:30:34.764122 3522 scope.go:117] "RemoveContainer" containerID="44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc" Jul 6 23:30:34.770267 containerd[2034]: time="2025-07-06T23:30:34.770206930Z" level=info msg="CreateContainer within sandbox \"1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 6 23:30:34.791242 containerd[2034]: time="2025-07-06T23:30:34.791173174Z" level=info msg="Container 9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:34.814114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:34.814860 containerd[2034]: time="2025-07-06T23:30:34.814265314Z" level=info msg="CreateContainer within sandbox \"1fa96f16519ed729e9deef0977a369ef5189bd8d6cb6933fd7aad2b2ebc7bebf\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\"" Jul 6 23:30:34.816403 containerd[2034]: time="2025-07-06T23:30:34.816328738Z" level=info msg="StartContainer for \"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\"" Jul 6 23:30:34.819629 containerd[2034]: time="2025-07-06T23:30:34.819548818Z" level=info msg="connecting to shim 9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50" address="unix:///run/containerd/s/54ede778b0e2970155189693bff55713e36304b200519eec5b99ca68c7f95958" protocol=ttrpc version=3 Jul 6 23:30:34.867245 systemd[1]: Started cri-containerd-9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50.scope - libcontainer container 9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50. Jul 6 23:30:34.927623 containerd[2034]: time="2025-07-06T23:30:34.927551831Z" level=info msg="StartContainer for \"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\" returns successfully" Jul 6 23:30:35.337010 update_engine[2003]: I20250706 23:30:35.336426 2003 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:30:35.337010 update_engine[2003]: I20250706 23:30:35.336807 2003 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:30:35.337592 update_engine[2003]: I20250706 23:30:35.337260 2003 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:30:35.338313 update_engine[2003]: E20250706 23:30:35.338248 2003 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:30:35.338417 update_engine[2003]: I20250706 23:30:35.338333 2003 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:30:35.338417 update_engine[2003]: I20250706 23:30:35.338353 2003 omaha_request_action.cc:617] Omaha request response: Jul 6 23:30:35.338520 update_engine[2003]: E20250706 23:30:35.338464 2003 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 6 23:30:35.338574 update_engine[2003]: I20250706 23:30:35.338543 2003 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 6 23:30:35.338574 update_engine[2003]: I20250706 23:30:35.338562 2003 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:30:35.338677 update_engine[2003]: I20250706 23:30:35.338577 2003 update_attempter.cc:306] Processing Done. Jul 6 23:30:35.338677 update_engine[2003]: E20250706 23:30:35.338601 2003 update_attempter.cc:619] Update failed. Jul 6 23:30:35.338677 update_engine[2003]: I20250706 23:30:35.338616 2003 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 6 23:30:35.338677 update_engine[2003]: I20250706 23:30:35.338630 2003 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 6 23:30:35.338677 update_engine[2003]: I20250706 23:30:35.338644 2003 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 6 23:30:35.338924 update_engine[2003]: I20250706 23:30:35.338749 2003 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:30:35.338924 update_engine[2003]: I20250706 23:30:35.338788 2003 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:30:35.338924 update_engine[2003]: I20250706 23:30:35.338804 2003 omaha_request_action.cc:272] Request: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: Jul 6 23:30:35.338924 update_engine[2003]: I20250706 23:30:35.338820 2003 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:30:35.339436 update_engine[2003]: I20250706 23:30:35.339150 2003 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:30:35.339874 update_engine[2003]: I20250706 23:30:35.339798 2003 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:30:35.340148 locksmithd[2043]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 6 23:30:35.340652 update_engine[2003]: E20250706 23:30:35.340438 2003 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340513 2003 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340530 2003 omaha_request_action.cc:617] Omaha request response: Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340545 2003 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340559 2003 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340572 2003 update_attempter.cc:306] Processing Done. Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340586 2003 update_attempter.cc:310] Error event sent. 
Jul 6 23:30:35.340652 update_engine[2003]: I20250706 23:30:35.340605 2003 update_check_scheduler.cc:74] Next update check in 48m30s Jul 6 23:30:35.341401 locksmithd[2043]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 6 23:30:35.775608 kubelet[3522]: I0706 23:30:35.775248 3522 scope.go:117] "RemoveContainer" containerID="4e1a3989ae7d03f26fdc9d19a51aa7de8373634fdfecd0b3dfe4e394b4fc63a3" Jul 6 23:30:35.779174 containerd[2034]: time="2025-07-06T23:30:35.779103575Z" level=info msg="CreateContainer within sandbox \"8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 6 23:30:35.800969 containerd[2034]: time="2025-07-06T23:30:35.799312499Z" level=info msg="Container 460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:35.809592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987607372.mount: Deactivated successfully. 
Jul 6 23:30:35.834617 containerd[2034]: time="2025-07-06T23:30:35.834532439Z" level=info msg="CreateContainer within sandbox \"8fe8cc33e323e120b7b57ba689128e20c3b79c3e9079c9476211fac62b5baf85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8\"" Jul 6 23:30:35.835512 containerd[2034]: time="2025-07-06T23:30:35.835449839Z" level=info msg="StartContainer for \"460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8\"" Jul 6 23:30:35.838275 containerd[2034]: time="2025-07-06T23:30:35.838212431Z" level=info msg="connecting to shim 460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8" address="unix:///run/containerd/s/454d054bbd101e4ec6c10a231cb8fc3ae52e7115b82ae50865805da579ae7c5b" protocol=ttrpc version=3 Jul 6 23:30:35.878242 systemd[1]: Started cri-containerd-460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8.scope - libcontainer container 460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8. Jul 6 23:30:35.962617 containerd[2034]: time="2025-07-06T23:30:35.962481348Z" level=info msg="StartContainer for \"460d6cbf1f106bec6778af364a3e0209ca1cc28ef078ed9a5a569a9856d068e8\" returns successfully" Jul 6 23:30:38.196417 systemd[1]: cri-containerd-bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9.scope: Deactivated successfully. Jul 6 23:30:38.198114 systemd[1]: cri-containerd-bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9.scope: Consumed 3.888s CPU time, 22.4M memory peak, 128K read from disk. 
Jul 6 23:30:38.202905 containerd[2034]: time="2025-07-06T23:30:38.202631435Z" level=info msg="received exit event container_id:\"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\" id:\"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\" pid:3176 exit_status:1 exited_at:{seconds:1751844638 nanos:202178051}" Jul 6 23:30:38.204888 containerd[2034]: time="2025-07-06T23:30:38.204788963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\" id:\"bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9\" pid:3176 exit_status:1 exited_at:{seconds:1751844638 nanos:202178051}" Jul 6 23:30:38.260722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9-rootfs.mount: Deactivated successfully. Jul 6 23:30:38.795506 kubelet[3522]: I0706 23:30:38.795174 3522 scope.go:117] "RemoveContainer" containerID="bfa8716d801d495b30964a07c6032a55910adedf08180099cf7ebcf5635a64e9" Jul 6 23:30:38.800977 containerd[2034]: time="2025-07-06T23:30:38.800840834Z" level=info msg="CreateContainer within sandbox \"06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 6 23:30:38.824974 containerd[2034]: time="2025-07-06T23:30:38.822239054Z" level=info msg="Container 73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:38.849376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654505122.mount: Deactivated successfully. 
Jul 6 23:30:38.882778 containerd[2034]: time="2025-07-06T23:30:38.882699350Z" level=info msg="CreateContainer within sandbox \"06636feeeb843149863b740ec67c2b33f91111e5c9f9533f90bcd68961d40615\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f\"" Jul 6 23:30:38.883727 containerd[2034]: time="2025-07-06T23:30:38.883687418Z" level=info msg="StartContainer for \"73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f\"" Jul 6 23:30:38.886555 containerd[2034]: time="2025-07-06T23:30:38.886486298Z" level=info msg="connecting to shim 73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f" address="unix:///run/containerd/s/50b5c199e8c85f1cc393ed90befe7a56ee8cb031c9d11b23acefee61b686afee" protocol=ttrpc version=3 Jul 6 23:30:38.945467 systemd[1]: Started cri-containerd-73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f.scope - libcontainer container 73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f. Jul 6 23:30:39.048843 containerd[2034]: time="2025-07-06T23:30:39.048218723Z" level=info msg="StartContainer for \"73c07f27d515baacd1991f377f89e87667c7165f24ceedd28643bad06fc0882f\" returns successfully" Jul 6 23:30:44.305711 kubelet[3522]: E0706 23:30:44.305175 3522 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-125?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 6 23:30:46.367153 systemd[1]: cri-containerd-9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50.scope: Deactivated successfully. 
Jul 6 23:30:46.369510 containerd[2034]: time="2025-07-06T23:30:46.368377759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\" id:\"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\" pid:6410 exit_status:1 exited_at:{seconds:1751844646 nanos:367736191}" Jul 6 23:30:46.369510 containerd[2034]: time="2025-07-06T23:30:46.368734255Z" level=info msg="received exit event container_id:\"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\" id:\"9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50\" pid:6410 exit_status:1 exited_at:{seconds:1751844646 nanos:367736191}" Jul 6 23:30:46.409552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50-rootfs.mount: Deactivated successfully. Jul 6 23:30:46.840862 kubelet[3522]: I0706 23:30:46.839819 3522 scope.go:117] "RemoveContainer" containerID="44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc" Jul 6 23:30:46.842255 kubelet[3522]: I0706 23:30:46.841706 3522 scope.go:117] "RemoveContainer" containerID="9092e42acbf06f813f84c8b5dc4f2d3018cae83b02b6da83d3de9d22999f0f50" Jul 6 23:30:46.842255 kubelet[3522]: E0706 23:30:46.842133 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-k2vfs_tigera-operator(96dd351f-970b-47b6-8968-17d3f4978722)\"" pod="tigera-operator/tigera-operator-747864d56d-k2vfs" podUID="96dd351f-970b-47b6-8968-17d3f4978722" Jul 6 23:30:46.843617 containerd[2034]: time="2025-07-06T23:30:46.843564442Z" level=info msg="RemoveContainer for \"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\"" Jul 6 23:30:46.853565 containerd[2034]: time="2025-07-06T23:30:46.853439086Z" level=info msg="RemoveContainer for \"44a67ebe78213c3d0ba615c592ff1a2d7c0d19215bc236b4e20896ae1a2992cc\" returns successfully"
Jul 6 23:30:49.450787 containerd[2034]: time="2025-07-06T23:30:49.450685523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b438a51ac7554806cc4594bcc2b57a5942d849e21b39e7db9bfab2619324a143\" id:\"43e756f63083340e1dff0e554888e1cf4ccc3783210578d2b14cd7cb151b4d79\" pid:6549 exit_status:1 exited_at:{seconds:1751844649 nanos:449167259}"