Jul 10 00:02:05.122410 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 10 00:02:05.122455 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 10 00:02:05.122479 kernel: KASLR disabled due to lack of seed
Jul 10 00:02:05.122495 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:02:05.122510 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Jul 10 00:02:05.122524 kernel: secureboot: Secure boot disabled
Jul 10 00:02:05.122541 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:02:05.122556 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 10 00:02:05.122571 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 10 00:02:05.122585 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 10 00:02:05.122600 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 10 00:02:05.122619 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 10 00:02:05.122633 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 10 00:02:05.122648 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 10 00:02:05.122665 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 10 00:02:05.122681 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 10 00:02:05.122700 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 10 00:02:05.122716 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 10 00:02:05.122731 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 10 00:02:05.122746 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 10 00:02:05.122762 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 10 00:02:05.122777 kernel: printk: legacy bootconsole [uart0] enabled
Jul 10 00:02:05.122792 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 10 00:02:05.122808 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 10 00:02:05.122824 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff]
Jul 10 00:02:05.122839 kernel: Zone ranges:
Jul 10 00:02:05.122854 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 10 00:02:05.122873 kernel: DMA32 empty
Jul 10 00:02:05.122889 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 10 00:02:05.122904 kernel: Device empty
Jul 10 00:02:05.122919 kernel: Movable zone start for each node
Jul 10 00:02:05.122934 kernel: Early memory node ranges
Jul 10 00:02:05.122949 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 10 00:02:05.122965 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 10 00:02:05.122980 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 10 00:02:05.122996 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 10 00:02:05.123012 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 10 00:02:05.123027 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 10 00:02:05.123043 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 10 00:02:05.123062 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 10 00:02:05.123084 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 10 00:02:05.123100 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 10 00:02:05.123117 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:02:05.123133 kernel: psci: PSCIv1.0 detected in firmware.
Jul 10 00:02:05.123152 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:02:05.123168 kernel: psci: Trusted OS migration not required
Jul 10 00:02:05.123184 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:02:05.123201 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 10 00:02:05.123217 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 10 00:02:05.123233 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 10 00:02:05.123249 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 10 00:02:05.123265 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:02:05.123281 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:02:05.123298 kernel: CPU features: detected: Spectre-v2
Jul 10 00:02:05.123314 kernel: CPU features: detected: Spectre-v3a
Jul 10 00:02:05.123333 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:02:05.125435 kernel: CPU features: detected: ARM erratum 1742098
Jul 10 00:02:05.125455 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 10 00:02:05.125472 kernel: alternatives: applying boot alternatives
Jul 10 00:02:05.125492 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 10 00:02:05.125510 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:02:05.125527 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:02:05.125544 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:02:05.125560 kernel: Fallback order for Node 0: 0
Jul 10 00:02:05.125577 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 10 00:02:05.125600 kernel: Policy zone: Normal
Jul 10 00:02:05.125617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:02:05.125633 kernel: software IO TLB: area num 2.
Jul 10 00:02:05.125650 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 10 00:02:05.125666 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:02:05.125682 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:02:05.125699 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:02:05.125716 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:02:05.125733 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:02:05.125749 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:02:05.125765 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:02:05.125782 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:02:05.125802 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:02:05.125819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:02:05.125835 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:02:05.125851 kernel: GICv3: 96 SPIs implemented
Jul 10 00:02:05.125867 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:02:05.125883 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:02:05.125899 kernel: GICv3: GICv3 features: 16 PPIs
Jul 10 00:02:05.125915 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 10 00:02:05.125931 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 10 00:02:05.125947 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 10 00:02:05.125963 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:02:05.125980 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:02:05.126000 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 10 00:02:05.126016 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 10 00:02:05.126032 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 10 00:02:05.126048 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:02:05.126065 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 10 00:02:05.126081 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 10 00:02:05.126098 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 10 00:02:05.126115 kernel: Console: colour dummy device 80x25
Jul 10 00:02:05.126131 kernel: printk: legacy console [tty1] enabled
Jul 10 00:02:05.126148 kernel: ACPI: Core revision 20240827
Jul 10 00:02:05.126168 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 10 00:02:05.126186 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:02:05.126202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:02:05.126218 kernel: landlock: Up and running.
Jul 10 00:02:05.126235 kernel: SELinux: Initializing.
Jul 10 00:02:05.126251 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:02:05.126268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:02:05.126284 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:02:05.126301 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:02:05.126322 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:02:05.126367 kernel: Remapping and enabling EFI services.
Jul 10 00:02:05.126389 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:02:05.126407 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:02:05.126424 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 10 00:02:05.126441 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 10 00:02:05.126458 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 10 00:02:05.126475 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:02:05.126492 kernel: SMP: Total of 2 processors activated.
Jul 10 00:02:05.126514 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:02:05.126544 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:02:05.126563 kernel: CPU features: detected: 32-bit EL1 Support
Jul 10 00:02:05.126586 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:02:05.126605 kernel: alternatives: applying system-wide alternatives
Jul 10 00:02:05.126625 kernel: Memory: 3812964K/4030464K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 212540K reserved, 0K cma-reserved)
Jul 10 00:02:05.126643 kernel: devtmpfs: initialized
Jul 10 00:02:05.126662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:02:05.126686 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:02:05.126704 kernel: 16928 pages in range for non-PLT usage
Jul 10 00:02:05.126723 kernel: 508448 pages in range for PLT usage
Jul 10 00:02:05.126742 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:02:05.126760 kernel: SMBIOS 3.0.0 present.
Jul 10 00:02:05.126779 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 10 00:02:05.126796 kernel: DMI: Memory slots populated: 0/0
Jul 10 00:02:05.126815 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:02:05.126833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:02:05.126857 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:02:05.126876 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:02:05.126894 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:02:05.126912 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Jul 10 00:02:05.126930 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:02:05.126949 kernel: cpuidle: using governor menu
Jul 10 00:02:05.126968 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:02:05.126987 kernel: ASID allocator initialised with 65536 entries
Jul 10 00:02:05.127005 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:02:05.127028 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:02:05.127049 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:02:05.127069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:02:05.127087 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:02:05.127105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:02:05.127123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:02:05.127141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:02:05.127159 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:02:05.127178 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:02:05.127200 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:02:05.127218 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:02:05.127235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:02:05.127253 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:02:05.127271 kernel: ACPI: Interpreter enabled
Jul 10 00:02:05.127289 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:02:05.127307 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:02:05.127325 kernel: ACPI: CPU0 has been hot-added
Jul 10 00:02:05.131402 kernel: ACPI: CPU1 has been hot-added
Jul 10 00:02:05.131446 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 10 00:02:05.131744 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:02:05.131935 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:02:05.132117 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:02:05.132297 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 10 00:02:05.132520 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 10 00:02:05.132546 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 10 00:02:05.132573 kernel: acpiphp: Slot [1] registered
Jul 10 00:02:05.132592 kernel: acpiphp: Slot [2] registered
Jul 10 00:02:05.132609 kernel: acpiphp: Slot [3] registered
Jul 10 00:02:05.132627 kernel: acpiphp: Slot [4] registered
Jul 10 00:02:05.132644 kernel: acpiphp: Slot [5] registered
Jul 10 00:02:05.132661 kernel: acpiphp: Slot [6] registered
Jul 10 00:02:05.132678 kernel: acpiphp: Slot [7] registered
Jul 10 00:02:05.132696 kernel: acpiphp: Slot [8] registered
Jul 10 00:02:05.132713 kernel: acpiphp: Slot [9] registered
Jul 10 00:02:05.132731 kernel: acpiphp: Slot [10] registered
Jul 10 00:02:05.132754 kernel: acpiphp: Slot [11] registered
Jul 10 00:02:05.132773 kernel: acpiphp: Slot [12] registered
Jul 10 00:02:05.132791 kernel: acpiphp: Slot [13] registered
Jul 10 00:02:05.132810 kernel: acpiphp: Slot [14] registered
Jul 10 00:02:05.132829 kernel: acpiphp: Slot [15] registered
Jul 10 00:02:05.132847 kernel: acpiphp: Slot [16] registered
Jul 10 00:02:05.132865 kernel: acpiphp: Slot [17] registered
Jul 10 00:02:05.132884 kernel: acpiphp: Slot [18] registered
Jul 10 00:02:05.132903 kernel: acpiphp: Slot [19] registered
Jul 10 00:02:05.132926 kernel: acpiphp: Slot [20] registered
Jul 10 00:02:05.132945 kernel: acpiphp: Slot [21] registered
Jul 10 00:02:05.132963 kernel: acpiphp: Slot [22] registered
Jul 10 00:02:05.132981 kernel: acpiphp: Slot [23] registered
Jul 10 00:02:05.133000 kernel: acpiphp: Slot [24] registered
Jul 10 00:02:05.133019 kernel: acpiphp: Slot [25] registered
Jul 10 00:02:05.133038 kernel: acpiphp: Slot [26] registered
Jul 10 00:02:05.133056 kernel: acpiphp: Slot [27] registered
Jul 10 00:02:05.133075 kernel: acpiphp: Slot [28] registered
Jul 10 00:02:05.133096 kernel: acpiphp: Slot [29] registered
Jul 10 00:02:05.133114 kernel: acpiphp: Slot [30] registered
Jul 10 00:02:05.133132 kernel: acpiphp: Slot [31] registered
Jul 10 00:02:05.133149 kernel: PCI host bridge to bus 0000:00
Jul 10 00:02:05.134254 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 10 00:02:05.134516 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:02:05.134721 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 10 00:02:05.134983 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 10 00:02:05.135227 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:02:05.135488 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 10 00:02:05.135680 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 10 00:02:05.135881 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 10 00:02:05.136068 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 10 00:02:05.136260 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 10 00:02:05.136533 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 10 00:02:05.136722 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 10 00:02:05.136907 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 10 00:02:05.137091 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 10 00:02:05.137275 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 10 00:02:05.137510 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 10 00:02:05.137698 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 10 00:02:05.137890 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 10 00:02:05.138080 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 10 00:02:05.138271 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 10 00:02:05.138471 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 10 00:02:05.138638 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:02:05.138801 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 10 00:02:05.138826 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:02:05.138852 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:02:05.138870 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:02:05.138888 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:02:05.138905 kernel: iommu: Default domain type: Translated
Jul 10 00:02:05.138922 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:02:05.138939 kernel: efivars: Registered efivars operations
Jul 10 00:02:05.138957 kernel: vgaarb: loaded
Jul 10 00:02:05.138975 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:02:05.138992 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:02:05.139013 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:02:05.139031 kernel: pnp: PnP ACPI init
Jul 10 00:02:05.139228 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 10 00:02:05.139254 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:02:05.139272 kernel: NET: Registered PF_INET protocol family
Jul 10 00:02:05.139289 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:02:05.139307 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:02:05.139325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:02:05.139362 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:02:05.139388 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:02:05.139406 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:02:05.139423 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:02:05.139441 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:02:05.139458 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:02:05.139476 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:02:05.139493 kernel: kvm [1]: HYP mode not available
Jul 10 00:02:05.139510 kernel: Initialise system trusted keyrings
Jul 10 00:02:05.139527 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:02:05.139549 kernel: Key type asymmetric registered
Jul 10 00:02:05.139566 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:02:05.139583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 00:02:05.139600 kernel: io scheduler mq-deadline registered
Jul 10 00:02:05.139617 kernel: io scheduler kyber registered
Jul 10 00:02:05.139635 kernel: io scheduler bfq registered
Jul 10 00:02:05.139828 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 10 00:02:05.139854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:02:05.139877 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:02:05.139895 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 10 00:02:05.139913 kernel: ACPI: button: Sleep Button [SLPB]
Jul 10 00:02:05.139930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:02:05.139948 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 10 00:02:05.140130 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 10 00:02:05.140155 kernel: printk: legacy console [ttyS0] disabled
Jul 10 00:02:05.140172 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 10 00:02:05.140190 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:02:05.140212 kernel: printk: legacy bootconsole [uart0] disabled
Jul 10 00:02:05.140230 kernel: thunder_xcv, ver 1.0
Jul 10 00:02:05.140247 kernel: thunder_bgx, ver 1.0
Jul 10 00:02:05.140264 kernel: nicpf, ver 1.0
Jul 10 00:02:05.140281 kernel: nicvf, ver 1.0
Jul 10 00:02:05.140506 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:02:05.140729 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:02:04 UTC (1752105724)
Jul 10 00:02:05.140757 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:02:05.140781 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 10 00:02:05.140799 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:02:05.140817 kernel: watchdog: NMI not fully supported
Jul 10 00:02:05.140834 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:02:05.140852 kernel: Segment Routing with IPv6
Jul 10 00:02:05.140870 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:02:05.140887 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:02:05.140904 kernel: Key type dns_resolver registered
Jul 10 00:02:05.140922 kernel: registered taskstats version 1
Jul 10 00:02:05.140943 kernel: Loading compiled-in X.509 certificates
Jul 10 00:02:05.140961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 10 00:02:05.140978 kernel: Demotion targets for Node 0: null
Jul 10 00:02:05.140996 kernel: Key type .fscrypt registered
Jul 10 00:02:05.141013 kernel: Key type fscrypt-provisioning registered
Jul 10 00:02:05.141030 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:02:05.141047 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:02:05.141064 kernel: ima: No architecture policies found
Jul 10 00:02:05.141082 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:02:05.141103 kernel: clk: Disabling unused clocks
Jul 10 00:02:05.141121 kernel: PM: genpd: Disabling unused power domains
Jul 10 00:02:05.141138 kernel: Warning: unable to open an initial console.
Jul 10 00:02:05.141156 kernel: Freeing unused kernel memory: 39488K
Jul 10 00:02:05.141173 kernel: Run /init as init process
Jul 10 00:02:05.141191 kernel: with arguments:
Jul 10 00:02:05.141211 kernel: /init
Jul 10 00:02:05.141231 kernel: with environment:
Jul 10 00:02:05.141248 kernel: HOME=/
Jul 10 00:02:05.141270 kernel: TERM=linux
Jul 10 00:02:05.141288 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:02:05.141307 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:02:05.141331 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:02:05.141407 systemd[1]: Detected virtualization amazon.
Jul 10 00:02:05.141427 systemd[1]: Detected architecture arm64.
Jul 10 00:02:05.141447 systemd[1]: Running in initrd.
Jul 10 00:02:05.141465 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:02:05.141492 systemd[1]: Hostname set to .
Jul 10 00:02:05.141511 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:02:05.141530 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:02:05.141549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:02:05.141568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:02:05.141589 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:02:05.141609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:02:05.141628 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:02:05.141654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:02:05.141676 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:02:05.141695 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:02:05.141715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:02:05.141734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:02:05.141753 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:02:05.141777 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:02:05.141796 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:02:05.141815 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:02:05.141835 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:02:05.141854 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:02:05.141874 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:02:05.141893 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:02:05.141913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:02:05.141933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:02:05.141956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:02:05.141976 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:02:05.141996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:02:05.142015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:02:05.142035 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:02:05.142055 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:02:05.142075 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:02:05.142094 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:02:05.142117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:02:05.142137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:02:05.142156 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:02:05.142177 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:02:05.142196 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:02:05.142221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:02:05.142240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:02:05.142299 systemd-journald[258]: Collecting audit messages is disabled.
Jul 10 00:02:05.142383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:02:05.142419 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:02:05.142455 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:02:05.142481 systemd-journald[258]: Journal started
Jul 10 00:02:05.142518 systemd-journald[258]: Runtime Journal (/run/log/journal/ec262c4a4f7c93937486a92d6912f8fc) is 8M, max 75.3M, 67.3M free.
Jul 10 00:02:05.099397 systemd-modules-load[259]: Inserted module 'overlay'
Jul 10 00:02:05.151840 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jul 10 00:02:05.153932 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:02:05.153972 kernel: Bridge firewalling registered
Jul 10 00:02:05.158509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:02:05.167192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:02:05.175612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:02:05.186642 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:02:05.218032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:02:05.226270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:02:05.234717 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 00:02:05.242425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:02:05.248051 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:02:05.253467 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:02:05.277094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:02:05.308570 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 10 00:02:05.373046 systemd-resolved[298]: Positive Trust Anchors:
Jul 10 00:02:05.373080 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:02:05.373143 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:02:05.481386 kernel: SCSI subsystem initialized
Jul 10 00:02:05.489396 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:02:05.502389 kernel: iscsi: registered transport (tcp)
Jul 10 00:02:05.523664 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:02:05.523737 kernel: QLogic iSCSI HBA Driver
Jul 10 00:02:05.558529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:02:05.585899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:02:05.591163 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:02:05.633386 kernel: random: crng init done
Jul 10 00:02:05.634688 systemd-resolved[298]: Defaulting to hostname 'linux'.
Jul 10 00:02:05.637531 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:02:05.645657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:02:05.692389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:02:05.698039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:02:05.784386 kernel: raid6: neonx8 gen() 6575 MB/s
Jul 10 00:02:05.801382 kernel: raid6: neonx4 gen() 6610 MB/s
Jul 10 00:02:05.818374 kernel: raid6: neonx2 gen() 5482 MB/s
Jul 10 00:02:05.835375 kernel: raid6: neonx1 gen() 3958 MB/s
Jul 10 00:02:05.852374 kernel: raid6: int64x8 gen() 3681 MB/s
Jul 10 00:02:05.869378 kernel: raid6: int64x4 gen() 3729 MB/s
Jul 10 00:02:05.886374 kernel: raid6: int64x2 gen() 3621 MB/s
Jul 10 00:02:05.904335 kernel: raid6: int64x1 gen() 2767 MB/s
Jul 10 00:02:05.904391 kernel: raid6: using algorithm neonx4 gen() 6610 MB/s
Jul 10 00:02:05.922326 kernel: raid6: .... xor() 4657 MB/s, rmw enabled
Jul 10 00:02:05.922375 kernel: raid6: using neon recovery algorithm
Jul 10 00:02:05.930926 kernel: xor: measuring software checksum speed
Jul 10 00:02:05.930987 kernel: 8regs : 12916 MB/sec
Jul 10 00:02:05.932131 kernel: 32regs : 13001 MB/sec
Jul 10 00:02:05.933388 kernel: arm64_neon : 8769 MB/sec
Jul 10 00:02:05.933419 kernel: xor: using function: 32regs (13001 MB/sec)
Jul 10 00:02:06.025400 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:02:06.036909 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:02:06.046633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:02:06.099701 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Jul 10 00:02:06.111659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:02:06.116580 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:02:06.155798 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Jul 10 00:02:06.200988 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:02:06.205499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:02:06.346453 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:02:06.357104 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:02:06.497163 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:02:06.497240 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 10 00:02:06.510216 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 10 00:02:06.510568 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 10 00:02:06.533421 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:47:bd:da:50:3d
Jul 10 00:02:06.541180 (udev-worker)[560]: Network interface NamePolicy= disabled on kernel command line.
Jul 10 00:02:06.546470 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 10 00:02:06.546509 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 10 00:02:06.557394 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 10 00:02:06.561129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:02:06.561633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:02:06.568838 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:02:06.575045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:02:06.585741 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:02:06.585778 kernel: GPT:9289727 != 16777215
Jul 10 00:02:06.585801 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:02:06.585824 kernel: GPT:9289727 != 16777215
Jul 10 00:02:06.585846 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:02:06.585879 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 10 00:02:06.594013 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:02:06.623610 kernel: nvme nvme0: using unchecked data buffer
Jul 10 00:02:06.623912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:02:06.712863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 10 00:02:06.779776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 10 00:02:06.830593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 10 00:02:06.836982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 10 00:02:06.868195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 10 00:02:06.890558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:02:06.898978 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:02:06.903280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:02:06.911032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:02:06.917739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:02:06.924864 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:02:06.953948 disk-uuid[684]: Primary Header is updated.
Jul 10 00:02:06.953948 disk-uuid[684]: Secondary Entries is updated.
Jul 10 00:02:06.953948 disk-uuid[684]: Secondary Header is updated.
Jul 10 00:02:06.963424 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 10 00:02:06.971409 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:02:07.992515 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 10 00:02:07.994730 disk-uuid[686]: The operation has completed successfully.
Jul 10 00:02:08.161237 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:02:08.163397 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:02:08.267375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:02:08.291807 sh[952]: Success
Jul 10 00:02:08.376608 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:02:08.376684 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:02:08.376710 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 10 00:02:08.389394 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 10 00:02:08.703718 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:02:08.711770 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:02:08.730055 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:02:08.753396 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 10 00:02:08.756358 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (975)
Jul 10 00:02:08.760836 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b
Jul 10 00:02:08.760885 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:02:08.762085 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 10 00:02:08.856532 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:02:08.860840 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:02:08.865669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:02:08.871006 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:02:08.882690 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:02:08.934422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009)
Jul 10 00:02:08.939465 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 10 00:02:08.939544 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:02:08.941445 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 10 00:02:08.966415 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 10 00:02:08.970023 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:02:08.981399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:02:09.050057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:02:09.059586 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:02:09.125137 systemd-networkd[1144]: lo: Link UP
Jul 10 00:02:09.125613 systemd-networkd[1144]: lo: Gained carrier
Jul 10 00:02:09.129122 systemd-networkd[1144]: Enumeration completed
Jul 10 00:02:09.129686 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:02:09.130586 systemd-networkd[1144]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:02:09.130594 systemd-networkd[1144]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:02:09.135694 systemd[1]: Reached target network.target - Network.
Jul 10 00:02:09.145745 systemd-networkd[1144]: eth0: Link UP
Jul 10 00:02:09.145752 systemd-networkd[1144]: eth0: Gained carrier
Jul 10 00:02:09.145773 systemd-networkd[1144]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:02:09.171412 systemd-networkd[1144]: eth0: DHCPv4 address 172.31.25.230/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 10 00:02:09.575720 ignition[1089]: Ignition 2.21.0
Jul 10 00:02:09.578276 ignition[1089]: Stage: fetch-offline
Jul 10 00:02:09.579189 ignition[1089]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:09.579219 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:09.593943 ignition[1089]: Ignition finished successfully
Jul 10 00:02:09.599030 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:02:09.600576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 00:02:09.646123 ignition[1159]: Ignition 2.21.0
Jul 10 00:02:09.646158 ignition[1159]: Stage: fetch
Jul 10 00:02:09.647127 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:09.647505 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:09.648233 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:09.668450 ignition[1159]: PUT result: OK
Jul 10 00:02:09.672000 ignition[1159]: parsed url from cmdline: ""
Jul 10 00:02:09.672022 ignition[1159]: no config URL provided
Jul 10 00:02:09.672041 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:02:09.672094 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:02:09.672127 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:09.676434 ignition[1159]: PUT result: OK
Jul 10 00:02:09.680717 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 10 00:02:09.685423 ignition[1159]: GET result: OK
Jul 10 00:02:09.685640 ignition[1159]: parsing config with SHA512: 5d36738edfa12da62f60a9b47b6dda1baa4061bb9abfef931cf1999c8633c766de792c5addd482e79be22c2e72dd89a9f9966e79e9538ef3a64292c6138e3454
Jul 10 00:02:09.701018 unknown[1159]: fetched base config from "system"
Jul 10 00:02:09.701039 unknown[1159]: fetched base config from "system"
Jul 10 00:02:09.701937 ignition[1159]: fetch: fetch complete
Jul 10 00:02:09.701052 unknown[1159]: fetched user config from "aws"
Jul 10 00:02:09.701950 ignition[1159]: fetch: fetch passed
Jul 10 00:02:09.702061 ignition[1159]: Ignition finished successfully
Jul 10 00:02:09.716388 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 00:02:09.722562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:02:09.763531 ignition[1166]: Ignition 2.21.0
Jul 10 00:02:09.764042 ignition[1166]: Stage: kargs
Jul 10 00:02:09.764614 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:09.764638 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:09.764801 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:09.775673 ignition[1166]: PUT result: OK
Jul 10 00:02:09.783626 ignition[1166]: kargs: kargs passed
Jul 10 00:02:09.783733 ignition[1166]: Ignition finished successfully
Jul 10 00:02:09.789073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:02:09.796533 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:02:09.834279 ignition[1173]: Ignition 2.21.0
Jul 10 00:02:09.834310 ignition[1173]: Stage: disks
Jul 10 00:02:09.834935 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:09.834959 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:09.835100 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:09.840087 ignition[1173]: PUT result: OK
Jul 10 00:02:09.849857 ignition[1173]: disks: disks passed
Jul 10 00:02:09.850161 ignition[1173]: Ignition finished successfully
Jul 10 00:02:09.855535 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:02:09.856037 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:02:09.862568 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:02:09.867334 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:02:09.872100 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:02:09.876220 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:02:09.883423 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:02:09.943214 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 10 00:02:09.947806 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:02:09.956485 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:02:10.083375 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none.
Jul 10 00:02:10.085071 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:02:10.089186 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:02:10.095830 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:02:10.107209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:02:10.113801 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:02:10.113914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:02:10.113963 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:02:10.145403 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Jul 10 00:02:10.150911 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 10 00:02:10.150972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:02:10.152330 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 10 00:02:10.151372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:02:10.157223 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:02:10.166613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:02:10.325075 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:02:10.335547 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:02:10.344375 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:02:10.354397 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:02:10.614758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:02:10.621746 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:02:10.628675 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:02:10.654288 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:02:10.657618 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 10 00:02:10.692922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:02:10.704499 ignition[1314]: INFO : Ignition 2.21.0
Jul 10 00:02:10.704499 ignition[1314]: INFO : Stage: mount
Jul 10 00:02:10.704499 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:10.704499 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:10.704499 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:10.716047 ignition[1314]: INFO : PUT result: OK
Jul 10 00:02:10.729386 ignition[1314]: INFO : mount: mount passed
Jul 10 00:02:10.734114 ignition[1314]: INFO : Ignition finished successfully
Jul 10 00:02:10.737078 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:02:10.744249 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:02:10.835544 systemd-networkd[1144]: eth0: Gained IPv6LL
Jul 10 00:02:11.087797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:02:11.138393 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Jul 10 00:02:11.143683 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 10 00:02:11.143729 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:02:11.143755 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 10 00:02:11.153263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:02:11.196871 ignition[1344]: INFO : Ignition 2.21.0
Jul 10 00:02:11.196871 ignition[1344]: INFO : Stage: files
Jul 10 00:02:11.200460 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:11.202626 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:11.202626 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:11.208600 ignition[1344]: INFO : PUT result: OK
Jul 10 00:02:11.218646 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:02:11.226675 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:02:11.226675 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:02:11.238332 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:02:11.241612 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:02:11.245905 unknown[1344]: wrote ssh authorized keys file for user: core
Jul 10 00:02:11.248434 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:02:11.253120 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:02:11.253120 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 10 00:02:11.339833 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:02:11.623236 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:02:11.623236 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:02:11.631604 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:02:11.669829 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:02:11.669829 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:02:11.669829 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 10 00:02:12.339181 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:02:12.703568 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:02:12.703568 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:02:12.712710 ignition[1344]: INFO : files: files passed
Jul 10 00:02:12.712710 ignition[1344]: INFO : Ignition finished successfully
Jul 10 00:02:12.713511 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:02:12.736918 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:02:12.750072 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:02:12.791253 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:02:12.791512 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:02:12.805987 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:02:12.809847 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:02:12.813325 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:02:12.818680 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:02:12.825691 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:02:12.833578 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:02:12.900483 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:02:12.901062 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:02:12.906096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:02:12.908488 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:02:12.910994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:02:12.918123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:02:12.958572 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:02:12.966030 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:02:13.004707 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:02:13.010011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:02:13.015230 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:02:13.023196 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:02:13.025205 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:02:13.030776 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:02:13.033588 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:02:13.039600 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:02:13.044527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:02:13.047608 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:02:13.055846 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:02:13.060646 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:02:13.064039 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:02:13.069222 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:02:13.073766 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:02:13.080092 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:02:13.082636 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:02:13.082866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:02:13.090885 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:02:13.093595 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:02:13.095836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:02:13.100552 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:02:13.105986 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:02:13.106203 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:02:13.114917 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:02:13.115333 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:02:13.123517 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:02:13.123909 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:02:13.131809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:02:13.142149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:02:13.148502 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:02:13.149078 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:02:13.156552 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:02:13.158020 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:02:13.173086 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:02:13.174297 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:02:13.202668 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:02:13.216269 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:02:13.222841 ignition[1398]: INFO : Ignition 2.21.0
Jul 10 00:02:13.222841 ignition[1398]: INFO : Stage: umount
Jul 10 00:02:13.222841 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:02:13.222841 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 10 00:02:13.222841 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 10 00:02:13.221920 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:02:13.240251 ignition[1398]: INFO : PUT result: OK
Jul 10 00:02:13.248070 ignition[1398]: INFO : umount: umount passed
Jul 10 00:02:13.248070 ignition[1398]: INFO : Ignition finished successfully
Jul 10 00:02:13.251622 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:02:13.254392 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:02:13.259437 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:02:13.259540 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:02:13.262556 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:02:13.262646 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:02:13.271834 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:02:13.271922 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:02:13.274697 systemd[1]: Stopped target network.target - Network.
Jul 10 00:02:13.280432 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:02:13.280524 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:02:13.283316 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:02:13.289303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:02:13.295691 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:02:13.298632 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:02:13.301020 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:02:13.308197 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:02:13.308272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:02:13.310761 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:02:13.310830 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:02:13.314650 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:02:13.314862 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:02:13.316439 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:02:13.316518 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:02:13.322673 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:02:13.322758 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:02:13.325240 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:02:13.327697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:02:13.349225 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:02:13.351647 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:02:13.361676 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:02:13.362091 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:02:13.362326 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:02:13.379320 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:02:13.380854 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:02:13.387490 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:02:13.387568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:02:13.415473 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:02:13.425416 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:02:13.425531 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:02:13.433574 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:02:13.433669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:02:13.447898 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:02:13.448133 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:02:13.455426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:02:13.455519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:02:13.463587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:02:13.471927 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:02:13.475532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:02:13.489600 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:02:13.498013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:02:13.502255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:02:13.502372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:02:13.511465 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:02:13.511541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:02:13.514026 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:02:13.514115 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:02:13.521068 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:02:13.521155 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:02:13.528073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:02:13.528168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:02:13.537863 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:02:13.546549 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:02:13.546690 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:02:13.551964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:02:13.552071 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:02:13.557791 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 00:02:13.565096 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:02:13.568912 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:02:13.569007 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:02:13.578868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:02:13.578976 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:02:13.588488 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:02:13.588624 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 10 00:02:13.588707 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:02:13.588793 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:02:13.589613 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:02:13.590053 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:02:13.609125 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:02:13.609293 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:02:13.613766 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:02:13.620581 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:02:13.650613 systemd[1]: Switching root.
Jul 10 00:02:13.696813 systemd-journald[258]: Journal stopped
Jul 10 00:02:15.751842 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:02:15.751965 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:02:15.752007 kernel: SELinux: policy capability open_perms=1
Jul 10 00:02:15.752037 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:02:15.752066 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:02:15.752096 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:02:15.752125 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:02:15.752153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:02:15.752182 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:02:15.752212 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:02:15.752243 kernel: audit: type=1403 audit(1752105734.028:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:02:15.752276 systemd[1]: Successfully loaded SELinux policy in 60.098ms.
Jul 10 00:02:15.752325 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.442ms.
Jul 10 00:02:15.752388 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:02:15.752420 systemd[1]: Detected virtualization amazon.
Jul 10 00:02:15.752451 systemd[1]: Detected architecture arm64.
Jul 10 00:02:15.752481 systemd[1]: Detected first boot.
Jul 10 00:02:15.752512 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:02:15.752555 zram_generator::config[1444]: No configuration found.
Jul 10 00:02:15.752590 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:02:15.752619 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:02:15.752652 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:02:15.752682 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:02:15.752713 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:02:15.752747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:02:15.752778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:02:15.752812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:02:15.752840 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:02:15.752869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:02:15.752900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:02:15.752931 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:02:15.752959 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:02:15.752987 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:02:15.753027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:02:15.753055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:02:15.753086 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:02:15.753116 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:02:15.753146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:02:15.753176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:02:15.753204 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:02:15.753232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:02:15.753261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:02:15.753289 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:02:15.753321 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:02:15.753393 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:02:15.753427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:02:15.753456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:02:15.753487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:02:15.753516 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:02:15.755770 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:02:15.755802 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:02:15.755831 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:02:15.755866 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:02:15.755894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:02:15.755924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:02:15.755955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:02:15.755991 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:02:15.756019 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:02:15.756046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:02:15.756076 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:02:15.756104 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:02:15.756138 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:02:15.756170 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:02:15.756205 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:02:15.756237 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:02:15.756265 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:02:15.756293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:02:15.756320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:02:15.756380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:02:15.756417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:02:15.756446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:02:15.756476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:02:15.756504 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:02:15.756534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:02:15.756563 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:02:15.756593 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:02:15.756622 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:02:15.756654 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:02:15.756686 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:02:15.756715 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:02:15.756743 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:02:15.756771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:02:15.756799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:02:15.756830 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:02:15.756859 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:02:15.756891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:02:15.756928 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:02:15.756958 systemd[1]: Stopped verity-setup.service.
Jul 10 00:02:15.756989 kernel: loop: module loaded
Jul 10 00:02:15.757018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:02:15.757047 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:02:15.757074 kernel: fuse: init (API version 7.41)
Jul 10 00:02:15.757100 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:02:15.757128 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:02:15.757156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:02:15.757183 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:02:15.757214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:02:15.757245 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:02:15.757276 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:02:15.757305 kernel: ACPI: bus type drm_connector registered
Jul 10 00:02:15.757332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:02:15.757436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:02:15.757466 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:02:15.757495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:02:15.757522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:02:15.757550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:02:15.757584 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:02:15.757612 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:02:15.757639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:02:15.757668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:02:15.757696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:02:15.757726 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:02:15.757755 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:02:15.757782 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:02:15.757816 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:02:15.757845 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:02:15.757872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:02:15.757903 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:02:15.757934 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:02:15.757966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:02:15.757997 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:02:15.758026 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:02:15.758054 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:02:15.758083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:02:15.758111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:02:15.758139 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:02:15.758168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:02:15.758246 systemd-journald[1523]: Collecting audit messages is disabled.
Jul 10 00:02:15.758296 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:02:15.758326 systemd-journald[1523]: Journal started
Jul 10 00:02:15.758408 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec262c4a4f7c93937486a92d6912f8fc) is 8M, max 75.3M, 67.3M free.
Jul 10 00:02:15.036639 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:02:15.052039 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 10 00:02:15.052883 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:02:15.765729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:02:15.783555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:02:15.783642 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:02:15.788411 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:02:15.836387 kernel: loop0: detected capacity change from 0 to 61240
Jul 10 00:02:15.854561 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:02:15.910123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:02:15.929412 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:02:15.935809 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:02:15.940604 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:02:15.951373 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:02:15.984035 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:02:15.991134 systemd-tmpfiles[1545]: ACLs are not supported, ignoring.
Jul 10 00:02:15.992879 systemd-tmpfiles[1545]: ACLs are not supported, ignoring.
Jul 10 00:02:16.020117 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec262c4a4f7c93937486a92d6912f8fc is 67.400ms for 937 entries.
Jul 10 00:02:16.020117 systemd-journald[1523]: System Journal (/var/log/journal/ec262c4a4f7c93937486a92d6912f8fc) is 8M, max 195.6M, 187.6M free.
Jul 10 00:02:16.131507 systemd-journald[1523]: Received client request to flush runtime journal.
Jul 10 00:02:16.131643 kernel: loop1: detected capacity change from 0 to 207008
Jul 10 00:02:16.022297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:02:16.037873 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:02:16.050477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:02:16.057933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:02:16.060895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:02:16.137987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:02:16.186188 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:02:16.193730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:02:16.199394 kernel: loop2: detected capacity change from 0 to 138376
Jul 10 00:02:16.245780 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Jul 10 00:02:16.245823 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Jul 10 00:02:16.255509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:02:16.334381 kernel: loop3: detected capacity change from 0 to 107312
Jul 10 00:02:16.464375 kernel: loop4: detected capacity change from 0 to 61240
Jul 10 00:02:16.487412 kernel: loop5: detected capacity change from 0 to 207008
Jul 10 00:02:16.534373 kernel: loop6: detected capacity change from 0 to 138376
Jul 10 00:02:16.568309 kernel: loop7: detected capacity change from 0 to 107312
Jul 10 00:02:16.595886 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 10 00:02:16.600673 (sd-merge)[1604]: Merged extensions into '/usr'.
Jul 10 00:02:16.612266 systemd[1]: Reload requested from client PID 1553 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:02:16.612295 systemd[1]: Reloading...
Jul 10 00:02:16.811418 zram_generator::config[1636]: No configuration found.
Jul 10 00:02:16.833375 ldconfig[1549]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:02:17.028990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:02:17.218022 systemd[1]: Reloading finished in 604 ms.
Jul 10 00:02:17.242411 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:02:17.245534 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:02:17.250248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:02:17.265484 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:02:17.271603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:02:17.277293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:02:17.310951 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:02:17.311399 systemd[1]: Reloading...
Jul 10 00:02:17.332932 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 00:02:17.333514 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 00:02:17.334821 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:02:17.335359 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:02:17.338114 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:02:17.338755 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Jul 10 00:02:17.339143 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Jul 10 00:02:17.348494 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:02:17.348527 systemd-tmpfiles[1684]: Skipping /boot
Jul 10 00:02:17.378506 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:02:17.378534 systemd-tmpfiles[1684]: Skipping /boot
Jul 10 00:02:17.427290 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Jul 10 00:02:17.510380 zram_generator::config[1721]: No configuration found.
Jul 10 00:02:17.770581 (udev-worker)[1727]: Network interface NamePolicy= disabled on kernel command line.
Jul 10 00:02:17.874151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:02:18.165167 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:02:18.166046 systemd[1]: Reloading finished in 853 ms.
Jul 10 00:02:18.250421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:02:18.294910 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:02:18.380470 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:02:18.409739 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:02:18.415914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:02:18.418848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:02:18.421784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:02:18.429798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:02:18.434949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:02:18.443713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:02:18.446314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:02:18.446432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:02:18.458693 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:02:18.468593 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:02:18.477761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:02:18.480228 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:02:18.486137 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:02:18.495154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:02:18.607001 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:02:18.610642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:02:18.613272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:02:18.616609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:02:18.617634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:02:18.620971 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:02:18.621312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:02:18.624322 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:02:18.633279 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:02:18.636524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:02:18.675637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:02:18.675917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:02:18.680745 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:02:18.698475 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:02:18.717225 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:02:18.720611 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:02:18.753517 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:02:18.810404 augenrules[1943]: No rules
Jul 10 00:02:18.823786 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:02:18.826243 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:02:18.841524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 10 00:02:18.845038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:02:18.852635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:02:18.871442 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:02:18.901763 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:02:19.009084 systemd-networkd[1885]: lo: Link UP
Jul 10 00:02:19.009106 systemd-networkd[1885]: lo: Gained carrier
Jul 10 00:02:19.012133 systemd-networkd[1885]: Enumeration completed
Jul 10 00:02:19.012369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:02:19.013180 systemd-networkd[1885]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:02:19.013202 systemd-networkd[1885]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:02:19.015431 systemd-resolved[1889]: Positive Trust Anchors:
Jul 10 00:02:19.015898 systemd-resolved[1889]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:02:19.015962 systemd-resolved[1889]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:02:19.018597 systemd-networkd[1885]: eth0: Link UP
Jul 10 00:02:19.018861 systemd-networkd[1885]: eth0: Gained carrier
Jul 10 00:02:19.018897 systemd-networkd[1885]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:02:19.020781 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 00:02:19.029658 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:02:19.041445 systemd-networkd[1885]: eth0: DHCPv4 address 172.31.25.230/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 10 00:02:19.044936 systemd-resolved[1889]: Defaulting to hostname 'linux'.
Jul 10 00:02:19.048935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:02:19.051894 systemd[1]: Reached target network.target - Network.
Jul 10 00:02:19.055530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:02:19.058211 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:02:19.060654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:02:19.066012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:02:19.069049 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:02:19.072941 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:02:19.075762 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:02:19.079195 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:02:19.079245 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:02:19.081152 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:02:19.086140 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:02:19.092450 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:02:19.101151 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 00:02:19.104271 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 00:02:19.107020 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 00:02:19.118554 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:02:19.121505 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 00:02:19.127383 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 00:02:19.130753 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:02:19.133966 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:02:19.136930 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:02:19.139240 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:02:19.139415 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:02:19.141662 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:02:19.148884 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:02:19.156844 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:02:19.165189 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:02:19.170532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:02:19.178795 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:02:19.181553 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:02:19.187795 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:02:19.197770 systemd[1]: Started ntpd.service - Network Time Service. Jul 10 00:02:19.209322 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:02:19.221925 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 10 00:02:19.226541 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:02:19.239185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:02:19.251894 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:02:19.256005 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:02:19.257959 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 10 00:02:19.260955 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:02:19.274656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:02:19.285436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:02:19.296455 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:02:19.299463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:02:19.332074 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:02:19.367593 jq[1971]: false Jul 10 00:02:19.368832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:02:19.370500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:02:19.446632 extend-filesystems[1972]: Found /dev/nvme0n1p6 Jul 10 00:02:19.441593 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:02:19.458272 jq[1983]: true Jul 10 00:02:19.490785 ntpd[1974]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:34:42 UTC 2025 (1): Starting Jul 10 00:02:19.497516 tar[1985]: linux-arm64/LICENSE Jul 10 00:02:19.497516 tar[1985]: linux-arm64/helm Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:34:42 UTC 2025 (1): Starting Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: ---------------------------------------------------- Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: ntp-4 is maintained by Network Time Foundation, Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: corporation. Support and training for ntp-4 are Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: available at https://www.nwtime.org/support Jul 10 00:02:19.497939 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: ---------------------------------------------------- Jul 10 00:02:19.490849 ntpd[1974]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 10 00:02:19.490869 ntpd[1974]: ---------------------------------------------------- Jul 10 00:02:19.490887 ntpd[1974]: ntp-4 is maintained by Network Time Foundation, Jul 10 00:02:19.490905 ntpd[1974]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 10 00:02:19.490922 ntpd[1974]: corporation. Support and training for ntp-4 are Jul 10 00:02:19.490939 ntpd[1974]: available at https://www.nwtime.org/support Jul 10 00:02:19.490957 ntpd[1974]: ---------------------------------------------------- Jul 10 00:02:19.513572 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:02:19.521240 extend-filesystems[1972]: Found /dev/nvme0n1p9 Jul 10 00:02:19.542300 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: proto: precision = 0.096 usec (-23) Jul 10 00:02:19.538637 ntpd[1974]: proto: precision = 0.096 usec (-23) Jul 10 00:02:19.534655 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:02:19.549551 extend-filesystems[1972]: Checking size of /dev/nvme0n1p9 Jul 10 00:02:19.557552 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: basedate set to 2025-06-27 Jul 10 00:02:19.557552 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: gps base set to 2025-06-29 (week 2373) Jul 10 00:02:19.546458 ntpd[1974]: basedate set to 2025-06-27 Jul 10 00:02:19.546491 ntpd[1974]: gps base set to 2025-06-29 (week 2373) Jul 10 00:02:19.569151 ntpd[1974]: Listen and drop on 0 v6wildcard [::]:123 Jul 10 00:02:19.570928 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listen and drop on 0 v6wildcard [::]:123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listen normally on 2 lo 127.0.0.1:123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listen normally on 3 eth0 172.31.25.230:123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listen normally on 4 lo [::1]:123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: bind(21) AF_INET6 fe80::447:bdff:feda:503d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: unable to create socket on eth0 (5) for fe80::447:bdff:feda:503d%2#123
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: failed to init interface for address fe80::447:bdff:feda:503d%2
Jul 10 00:02:19.576229 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: Listening on routing socket on fd #21 for interface updates
Jul 10 00:02:19.569239 ntpd[1974]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 10 00:02:19.569527 ntpd[1974]: Listen normally on 2 lo 127.0.0.1:123
Jul 10 00:02:19.569586 ntpd[1974]: Listen normally on 3 eth0 172.31.25.230:123
Jul 10 00:02:19.569648 ntpd[1974]: Listen normally on 4 lo [::1]:123
Jul 10 00:02:19.569716 ntpd[1974]: bind(21) AF_INET6 fe80::447:bdff:feda:503d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 10 00:02:19.569753 ntpd[1974]: unable to create socket on eth0 (5) for fe80::447:bdff:feda:503d%2#123
Jul 10 00:02:19.569778 ntpd[1974]: failed to init interface for address fe80::447:bdff:feda:503d%2
Jul 10 00:02:19.569826 ntpd[1974]: Listening on routing socket on fd #21 for interface updates
Jul 10 00:02:19.570665 dbus-daemon[1969]: [system] SELinux support is enabled
Jul 10 00:02:19.580374 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:02:19.580423 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:02:19.583299 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:02:19.583334 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 00:02:19.620867 jq[2012]: true
Jul 10 00:02:19.642107 dbus-daemon[1969]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1885 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 10 00:02:19.669377 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 10 00:02:19.675517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 10 00:02:19.680647 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 10 00:02:19.680647 ntpd[1974]: 10 Jul 00:02:19 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 10 00:02:19.669465 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 10 00:02:19.689455 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 10 00:02:19.706417 extend-filesystems[1972]: Resized partition /dev/nvme0n1p9
Jul 10 00:02:19.729970 extend-filesystems[2033]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 00:02:19.743041 update_engine[1982]: I20250710 00:02:19.741052 1982 main.cc:92] Flatcar Update Engine starting
Jul 10 00:02:19.770543 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 10 00:02:19.775125 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 00:02:19.785916 update_engine[1982]: I20250710 00:02:19.782552 1982 update_check_scheduler.cc:74] Next update check in 10m36s
Jul 10 00:02:19.823491 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 00:02:19.867666 systemd-logind[1981]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 10 00:02:19.867724 systemd-logind[1981]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 10 00:02:19.868119 systemd-logind[1981]: New seat seat0.
Jul 10 00:02:19.871726 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 00:02:19.885371 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 10 00:02:19.900391 coreos-metadata[1968]: Jul 10 00:02:19.899 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 10 00:02:19.907165 coreos-metadata[1968]: Jul 10 00:02:19.903 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 10 00:02:19.908190 coreos-metadata[1968]: Jul 10 00:02:19.907 INFO Fetch successful
Jul 10 00:02:19.908190 coreos-metadata[1968]: Jul 10 00:02:19.907 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 10 00:02:19.911956 coreos-metadata[1968]: Jul 10 00:02:19.910 INFO Fetch successful
Jul 10 00:02:19.911956 coreos-metadata[1968]: Jul 10 00:02:19.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 10 00:02:19.912151 bash[2049]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:02:19.912900 extend-filesystems[2033]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 10 00:02:19.912900 extend-filesystems[2033]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 00:02:19.912900 extend-filesystems[2033]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 10 00:02:19.924332 coreos-metadata[1968]: Jul 10 00:02:19.918 INFO Fetch successful
Jul 10 00:02:19.924332 coreos-metadata[1968]: Jul 10 00:02:19.918 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 10 00:02:19.926509 coreos-metadata[1968]: Jul 10 00:02:19.926 INFO Fetch successful
Jul 10 00:02:19.926509 coreos-metadata[1968]: Jul 10 00:02:19.926 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 10 00:02:19.927927 coreos-metadata[1968]: Jul 10 00:02:19.927 INFO Fetch failed with 404: resource not found
Jul 10 00:02:19.927927 coreos-metadata[1968]: Jul 10 00:02:19.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 10 00:02:19.928653 coreos-metadata[1968]: Jul 10 00:02:19.928 INFO Fetch successful
Jul 10 00:02:19.928653 coreos-metadata[1968]: Jul 10 00:02:19.928 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 10 00:02:19.932142 coreos-metadata[1968]: Jul 10 00:02:19.931 INFO Fetch successful
Jul 10 00:02:19.932142 coreos-metadata[1968]: Jul 10 00:02:19.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 10 00:02:19.932847 coreos-metadata[1968]: Jul 10 00:02:19.932 INFO Fetch successful
Jul 10 00:02:19.932847 coreos-metadata[1968]: Jul 10 00:02:19.932 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 10 00:02:19.933535 coreos-metadata[1968]: Jul 10 00:02:19.933 INFO Fetch successful
Jul 10 00:02:19.933941 coreos-metadata[1968]: Jul 10 00:02:19.933 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 10 00:02:19.937904 coreos-metadata[1968]: Jul 10 00:02:19.935 INFO Fetch successful
Jul 10 00:02:19.967044 extend-filesystems[1972]: Resized filesystem in /dev/nvme0n1p9
Jul 10 00:02:19.972171 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:02:19.974948 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:02:19.983004 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:02:19.996932 systemd[1]: Starting sshkeys.service... Jul 10 00:02:20.072476 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:02:20.076061 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:02:20.107251 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 10 00:02:20.117309 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 10 00:02:20.187654 containerd[1990]: time="2025-07-10T00:02:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:02:20.189100 containerd[1990]: time="2025-07-10T00:02:20.189040941Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:02:20.243805 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310475350Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.056µs" Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310535458Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310575442Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310877206Z" level=info 
msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310918306Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.310967842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.311084902Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:02:20.311390 containerd[1990]: time="2025-07-10T00:02:20.311110378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.313551598Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.313604782Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.313639882Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.313662682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.313876534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Jul 10 00:02:20.315378 containerd[1990]: time="2025-07-10T00:02:20.314281078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:02:20.324125 containerd[1990]: time="2025-07-10T00:02:20.318389758Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:02:20.324125 containerd[1990]: time="2025-07-10T00:02:20.318446290Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:02:20.324125 containerd[1990]: time="2025-07-10T00:02:20.318525514Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:02:20.324125 containerd[1990]: time="2025-07-10T00:02:20.323532490Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:02:20.324125 containerd[1990]: time="2025-07-10T00:02:20.323723650Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:02:20.335866 coreos-metadata[2066]: Jul 10 00:02:20.335 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 10 00:02:20.338097 coreos-metadata[2066]: Jul 10 00:02:20.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 10 00:02:20.339382 coreos-metadata[2066]: Jul 10 00:02:20.339 INFO Fetch successful Jul 10 00:02:20.339548 coreos-metadata[2066]: Jul 10 00:02:20.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.341795194Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.341883094Z" level=info msg="loading plugin" 
id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.341934034Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.341965402Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.341995246Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342022834Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342054442Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342088306Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342123226Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342150250Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342176218Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:02:20.342410 containerd[1990]: time="2025-07-10T00:02:20.342208306Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342672442Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342748954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342785218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342841042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342873010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:02:20.342973 containerd[1990]: time="2025-07-10T00:02:20.342929998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:02:20.343208 containerd[1990]: time="2025-07-10T00:02:20.342959194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:02:20.343208 containerd[1990]: time="2025-07-10T00:02:20.343013014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:02:20.343208 containerd[1990]: time="2025-07-10T00:02:20.343043278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:02:20.343208 containerd[1990]: time="2025-07-10T00:02:20.343096666Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:02:20.343208 containerd[1990]: time="2025-07-10T00:02:20.343124506Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:02:20.345572 containerd[1990]: time="2025-07-10T00:02:20.343579258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for 
snapshotter \"overlayfs\"" Jul 10 00:02:20.345572 containerd[1990]: time="2025-07-10T00:02:20.343641538Z" level=info msg="Start snapshots syncer" Jul 10 00:02:20.345572 containerd[1990]: time="2025-07-10T00:02:20.343727218Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:02:20.345722 coreos-metadata[2066]: Jul 10 00:02:20.343 INFO Fetch successful Jul 10 00:02:20.346805 containerd[1990]: time="2025-07-10T00:02:20.345727918Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRoo
tDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:02:20.346805 containerd[1990]: time="2025-07-10T00:02:20.345879622Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:02:20.347069 containerd[1990]: time="2025-07-10T00:02:20.346102030Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.347973670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348061426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348091366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348122374Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348152938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348180022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348213418Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348271750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer 
type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348299926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.348889126Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.349029874Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.349091014Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:02:20.352944 containerd[1990]: time="2025-07-10T00:02:20.349114174Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.349167586Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.349367422Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.349401082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.349428370Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.350590018Z" level=info msg="runtime interface created" Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.350607826Z" level=info msg="created 
NRI interface" Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.350631418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.350691946Z" level=info msg="Connect containerd service" Jul 10 00:02:20.354060 containerd[1990]: time="2025-07-10T00:02:20.350789074Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:02:20.354666 unknown[2066]: wrote ssh authorized keys file for user: core Jul 10 00:02:20.359607 containerd[1990]: time="2025-07-10T00:02:20.356984230Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:02:20.495495 update-ssh-keys[2141]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:02:20.499433 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 00:02:20.508108 systemd[1]: Finished sshkeys.service. 
Jul 10 00:02:20.510043 ntpd[1974]: bind(24) AF_INET6 fe80::447:bdff:feda:503d%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 00:02:20.513952 ntpd[1974]: 10 Jul 00:02:20 ntpd[1974]: bind(24) AF_INET6 fe80::447:bdff:feda:503d%2#123 flags 0x11 failed: Cannot assign requested address Jul 10 00:02:20.513952 ntpd[1974]: 10 Jul 00:02:20 ntpd[1974]: unable to create socket on eth0 (6) for fe80::447:bdff:feda:503d%2#123 Jul 10 00:02:20.513952 ntpd[1974]: 10 Jul 00:02:20 ntpd[1974]: failed to init interface for address fe80::447:bdff:feda:503d%2 Jul 10 00:02:20.510105 ntpd[1974]: unable to create socket on eth0 (6) for fe80::447:bdff:feda:503d%2#123 Jul 10 00:02:20.510133 ntpd[1974]: failed to init interface for address fe80::447:bdff:feda:503d%2 Jul 10 00:02:20.534589 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 10 00:02:20.555893 dbus-daemon[1969]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 10 00:02:20.568714 dbus-daemon[1969]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 10 00:02:20.577055 systemd[1]: Starting polkit.service - Authorization Manager... Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.703101336Z" level=info msg="Start subscribing containerd event" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.703183848Z" level=info msg="Start recovering state" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.707515596Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708784968Z" level=info msg="Start event monitor" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708858516Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708879360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708882168Z" level=info msg="Start streaming server" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708963120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708982536Z" level=info msg="runtime interface starting up..." Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.708998292Z" level=info msg="starting plugins..." Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.709033440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:02:20.712844 containerd[1990]: time="2025-07-10T00:02:20.709257624Z" level=info msg="containerd successfully booted in 0.526368s" Jul 10 00:02:20.718583 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:02:20.883697 systemd-networkd[1885]: eth0: Gained IPv6LL Jul 10 00:02:20.895367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:02:20.898787 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:02:20.915809 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 10 00:02:20.923840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:20.932600 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 10 00:02:21.079856 polkitd[2170]: Started polkitd version 126 Jul 10 00:02:21.098706 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:02:21.116012 polkitd[2170]: Loading rules from directory /etc/polkit-1/rules.d Jul 10 00:02:21.121570 polkitd[2170]: Loading rules from directory /run/polkit-1/rules.d Jul 10 00:02:21.122485 polkitd[2170]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:02:21.123149 polkitd[2170]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 10 00:02:21.123198 polkitd[2170]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:02:21.123278 polkitd[2170]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 10 00:02:21.129515 polkitd[2170]: Finished loading, compiling and executing 2 rules Jul 10 00:02:21.129899 systemd[1]: Started polkit.service - Authorization Manager. Jul 10 00:02:21.141985 amazon-ssm-agent[2184]: Initializing new seelog logger Jul 10 00:02:21.142335 dbus-daemon[1969]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 10 00:02:21.144970 amazon-ssm-agent[2184]: New Seelog Logger Creation Complete Jul 10 00:02:21.144970 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.144970 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.145504 polkitd[2170]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 10 00:02:21.147085 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 processing appconfig overrides Jul 10 00:02:21.148487 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 processing appconfig overrides Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 processing appconfig overrides Jul 10 00:02:21.151362 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1476 INFO Proxy environment variables: Jul 10 00:02:21.155615 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.156401 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:21.156652 amazon-ssm-agent[2184]: 2025/07/10 00:02:21 processing appconfig overrides Jul 10 00:02:21.200818 systemd-hostnamed[2030]: Hostname set to (transient) Jul 10 00:02:21.203445 systemd-resolved[1889]: System hostname changed to 'ip-172-31-25-230'. Jul 10 00:02:21.251369 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1484 INFO https_proxy: Jul 10 00:02:21.350435 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1484 INFO http_proxy: Jul 10 00:02:21.450698 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1484 INFO no_proxy: Jul 10 00:02:21.551360 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1487 INFO Checking if agent identity type OnPrem can be assumed Jul 10 00:02:21.650815 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.1488 INFO Checking if agent identity type EC2 can be assumed Jul 10 00:02:21.709637 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:02:21.751685 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3463 INFO Agent will take identity from EC2 Jul 10 00:02:21.799463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 10 00:02:21.808136 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:02:21.814773 systemd[1]: Started sshd@0-172.31.25.230:22-139.178.89.65:33122.service - OpenSSH per-connection server daemon (139.178.89.65:33122). Jul 10 00:02:21.851364 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3528 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 10 00:02:21.871614 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:02:21.874971 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:02:21.888995 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:02:21.935751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:02:21.943291 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:02:21.948680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:02:21.953877 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:02:21.957511 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3528 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 10 00:02:21.983116 tar[1985]: linux-arm64/README.md Jul 10 00:02:22.018485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:02:22.060145 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3529 INFO [amazon-ssm-agent] Starting Core Agent Jul 10 00:02:22.109458 sshd[2220]: Accepted publickey for core from 139.178.89.65 port 33122 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:22.115084 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:22.138099 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:02:22.143732 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:02:22.157445 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3529 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 10 00:02:22.175509 systemd-logind[1981]: New session 1 of user core. Jul 10 00:02:22.198716 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:02:22.211510 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:02:22.242589 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:02:22.249705 systemd-logind[1981]: New session c1 of user core. Jul 10 00:02:22.258440 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3529 INFO [Registrar] Starting registrar module Jul 10 00:02:22.358761 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3581 INFO [EC2Identity] Checking disk for registration info Jul 10 00:02:22.461421 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3582 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 10 00:02:22.560007 amazon-ssm-agent[2184]: 2025-07-10 00:02:21.3582 INFO [EC2Identity] Generating registration keypair Jul 10 00:02:22.600542 systemd[2235]: Queued start job for default target default.target. Jul 10 00:02:22.607424 systemd[2235]: Created slice app.slice - User Application Slice. Jul 10 00:02:22.607488 systemd[2235]: Reached target paths.target - Paths. Jul 10 00:02:22.607576 systemd[2235]: Reached target timers.target - Timers. Jul 10 00:02:22.610006 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:02:22.638834 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:02:22.639650 systemd[2235]: Reached target sockets.target - Sockets. Jul 10 00:02:22.639762 systemd[2235]: Reached target basic.target - Basic System. Jul 10 00:02:22.639854 systemd[2235]: Reached target default.target - Main User Target. Jul 10 00:02:22.639913 systemd[2235]: Startup finished in 364ms. Jul 10 00:02:22.640180 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 10 00:02:22.658913 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:02:22.827510 systemd[1]: Started sshd@1-172.31.25.230:22-139.178.89.65:39736.service - OpenSSH per-connection server daemon (139.178.89.65:39736). Jul 10 00:02:23.071428 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 39736 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:23.075055 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:23.085527 systemd-logind[1981]: New session 2 of user core. Jul 10 00:02:23.094614 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:02:23.166740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:23.169889 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:02:23.173567 systemd[1]: Startup finished in 3.731s (kernel) + 9.351s (initrd) + 9.205s (userspace) = 22.289s. Jul 10 00:02:23.202124 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:02:23.251786 sshd[2248]: Connection closed by 139.178.89.65 port 39736 Jul 10 00:02:23.252565 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 10 00:02:23.263563 systemd[1]: sshd@1-172.31.25.230:22-139.178.89.65:39736.service: Deactivated successfully. Jul 10 00:02:23.270947 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:02:23.277559 systemd-logind[1981]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:02:23.300934 systemd[1]: Started sshd@2-172.31.25.230:22-139.178.89.65:39752.service - OpenSSH per-connection server daemon (139.178.89.65:39752). Jul 10 00:02:23.305957 systemd-logind[1981]: Removed session 2. 
Jul 10 00:02:23.452081 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.4515 INFO [EC2Identity] Checking write access before registering Jul 10 00:02:23.501024 amazon-ssm-agent[2184]: 2025/07/10 00:02:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:23.501868 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 10 00:02:23.502055 amazon-ssm-agent[2184]: 2025/07/10 00:02:23 processing appconfig overrides Jul 10 00:02:23.509992 ntpd[1974]: Listen normally on 7 eth0 [fe80::447:bdff:feda:503d%2]:123 Jul 10 00:02:23.510545 ntpd[1974]: 10 Jul 00:02:23 ntpd[1974]: Listen normally on 7 eth0 [fe80::447:bdff:feda:503d%2]:123 Jul 10 00:02:23.535292 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.4545 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 10 00:02:23.535292 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5006 INFO [EC2Identity] EC2 registration was successful. Jul 10 00:02:23.535546 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5007 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 10 00:02:23.535546 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5008 INFO [CredentialRefresher] credentialRefresher has started Jul 10 00:02:23.536335 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5008 INFO [CredentialRefresher] Starting credentials refresher loop Jul 10 00:02:23.536335 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5349 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 10 00:02:23.536484 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5352 INFO [CredentialRefresher] Credentials ready Jul 10 00:02:23.536603 sshd[2264]: Accepted publickey for core from 139.178.89.65 port 39752 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:23.538267 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:23.546456 systemd-logind[1981]: New session 3 of user core. 
Jul 10 00:02:23.553104 amazon-ssm-agent[2184]: 2025-07-10 00:02:23.5364 INFO [CredentialRefresher] Next credential rotation will be in 29.999975579 minutes Jul 10 00:02:23.554610 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:02:23.677382 sshd[2270]: Connection closed by 139.178.89.65 port 39752 Jul 10 00:02:23.677537 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Jul 10 00:02:23.684989 systemd[1]: sshd@2-172.31.25.230:22-139.178.89.65:39752.service: Deactivated successfully. Jul 10 00:02:23.688405 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:02:23.690582 systemd-logind[1981]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:02:23.694061 systemd-logind[1981]: Removed session 3. Jul 10 00:02:23.712826 systemd[1]: Started sshd@3-172.31.25.230:22-139.178.89.65:39756.service - OpenSSH per-connection server daemon (139.178.89.65:39756). Jul 10 00:02:23.941449 sshd[2276]: Accepted publickey for core from 139.178.89.65 port 39756 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:23.943950 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:23.954437 systemd-logind[1981]: New session 4 of user core. Jul 10 00:02:23.967651 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:02:24.095041 sshd[2279]: Connection closed by 139.178.89.65 port 39756 Jul 10 00:02:24.094844 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Jul 10 00:02:24.105384 systemd[1]: sshd@3-172.31.25.230:22-139.178.89.65:39756.service: Deactivated successfully. Jul 10 00:02:24.108442 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:02:24.112114 systemd-logind[1981]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:02:24.115970 systemd-logind[1981]: Removed session 4. 
Jul 10 00:02:24.133394 systemd[1]: Started sshd@4-172.31.25.230:22-139.178.89.65:39762.service - OpenSSH per-connection server daemon (139.178.89.65:39762). Jul 10 00:02:24.320869 kubelet[2254]: E0710 00:02:24.320731 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:02:24.325498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:02:24.325818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:02:24.326488 systemd[1]: kubelet.service: Consumed 1.434s CPU time, 256.8M memory peak. Jul 10 00:02:24.341525 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 39762 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:24.344102 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:24.352848 systemd-logind[1981]: New session 5 of user core. Jul 10 00:02:24.360619 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:02:24.478199 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:02:24.479280 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:02:24.498303 sudo[2289]: pam_unix(sudo:session): session closed for user root Jul 10 00:02:24.521884 sshd[2288]: Connection closed by 139.178.89.65 port 39762 Jul 10 00:02:24.522909 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jul 10 00:02:24.530327 systemd[1]: sshd@4-172.31.25.230:22-139.178.89.65:39762.service: Deactivated successfully. Jul 10 00:02:24.534192 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:02:24.535888 systemd-logind[1981]: Session 5 logged out. 
Waiting for processes to exit. Jul 10 00:02:24.539769 systemd-logind[1981]: Removed session 5. Jul 10 00:02:24.558503 systemd[1]: Started sshd@5-172.31.25.230:22-139.178.89.65:39772.service - OpenSSH per-connection server daemon (139.178.89.65:39772). Jul 10 00:02:24.566757 amazon-ssm-agent[2184]: 2025-07-10 00:02:24.5666 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 10 00:02:24.668632 amazon-ssm-agent[2184]: 2025-07-10 00:02:24.5698 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2298) started Jul 10 00:02:24.761751 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 39772 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:24.765173 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:24.769574 amazon-ssm-agent[2184]: 2025-07-10 00:02:24.5698 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 10 00:02:24.775457 systemd-logind[1981]: New session 6 of user core. Jul 10 00:02:24.781626 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:02:24.885967 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:02:24.886600 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:02:24.898786 sudo[2312]: pam_unix(sudo:session): session closed for user root Jul 10 00:02:24.908048 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:02:24.909153 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:02:24.926041 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 10 00:02:24.994964 augenrules[2334]: No rules Jul 10 00:02:24.997487 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:02:24.998017 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:02:25.000193 sudo[2311]: pam_unix(sudo:session): session closed for user root Jul 10 00:02:25.023925 sshd[2307]: Connection closed by 139.178.89.65 port 39772 Jul 10 00:02:25.024881 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jul 10 00:02:25.030855 systemd[1]: sshd@5-172.31.25.230:22-139.178.89.65:39772.service: Deactivated successfully. Jul 10 00:02:25.035457 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:02:25.037612 systemd-logind[1981]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:02:25.040778 systemd-logind[1981]: Removed session 6. Jul 10 00:02:25.065608 systemd[1]: Started sshd@6-172.31.25.230:22-139.178.89.65:39782.service - OpenSSH per-connection server daemon (139.178.89.65:39782). Jul 10 00:02:25.266919 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 39782 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:02:25.269484 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:02:25.277532 systemd-logind[1981]: New session 7 of user core. Jul 10 00:02:25.296612 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:02:25.398709 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:02:25.399311 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:02:25.983704 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 10 00:02:25.999099 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:02:26.403750 dockerd[2364]: time="2025-07-10T00:02:26.403578160Z" level=info msg="Starting up" Jul 10 00:02:26.406834 dockerd[2364]: time="2025-07-10T00:02:26.406766596Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:02:26.453786 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3198551516-merged.mount: Deactivated successfully. Jul 10 00:02:26.235161 systemd-resolved[1889]: Clock change detected. Flushing caches. Jul 10 00:02:26.250500 systemd-journald[1523]: Time jumped backwards, rotating. Jul 10 00:02:26.377991 dockerd[2364]: time="2025-07-10T00:02:26.377914756Z" level=info msg="Loading containers: start." Jul 10 00:02:26.392442 kernel: Initializing XFRM netlink socket Jul 10 00:02:26.699364 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:02:26.777978 systemd-networkd[1885]: docker0: Link UP Jul 10 00:02:26.783463 dockerd[2364]: time="2025-07-10T00:02:26.782562030Z" level=info msg="Loading containers: done." 
Jul 10 00:02:26.806997 dockerd[2364]: time="2025-07-10T00:02:26.806937330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:02:26.807244 dockerd[2364]: time="2025-07-10T00:02:26.807215670Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:02:26.807523 dockerd[2364]: time="2025-07-10T00:02:26.807494670Z" level=info msg="Initializing buildkit" Jul 10 00:02:26.844061 dockerd[2364]: time="2025-07-10T00:02:26.844010995Z" level=info msg="Completed buildkit initialization" Jul 10 00:02:26.858505 dockerd[2364]: time="2025-07-10T00:02:26.858419203Z" level=info msg="Daemon has completed initialization" Jul 10 00:02:26.859247 dockerd[2364]: time="2025-07-10T00:02:26.858936619Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:02:26.859121 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:02:27.980137 containerd[1990]: time="2025-07-10T00:02:27.979987388Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 00:02:28.684072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767547819.mount: Deactivated successfully. 
Jul 10 00:02:30.108983 containerd[1990]: time="2025-07-10T00:02:30.108926647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:30.112473 containerd[1990]: time="2025-07-10T00:02:30.112375315Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 10 00:02:30.112848 containerd[1990]: time="2025-07-10T00:02:30.112809871Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:30.122424 containerd[1990]: time="2025-07-10T00:02:30.121783339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:30.123080 containerd[1990]: time="2025-07-10T00:02:30.123036847Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.142183719s" Jul 10 00:02:30.123212 containerd[1990]: time="2025-07-10T00:02:30.123184783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 10 00:02:30.124435 containerd[1990]: time="2025-07-10T00:02:30.124230931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 00:02:31.880412 containerd[1990]: time="2025-07-10T00:02:31.880324740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:31.882992 containerd[1990]: time="2025-07-10T00:02:31.882923124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 10 00:02:31.885532 containerd[1990]: time="2025-07-10T00:02:31.885459624Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:31.890703 containerd[1990]: time="2025-07-10T00:02:31.890621976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:31.892571 containerd[1990]: time="2025-07-10T00:02:31.892359552Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.767884973s" Jul 10 00:02:31.892571 containerd[1990]: time="2025-07-10T00:02:31.892425276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 10 00:02:31.893518 containerd[1990]: time="2025-07-10T00:02:31.893479188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 00:02:33.310486 containerd[1990]: time="2025-07-10T00:02:33.310381415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:33.312471 containerd[1990]: time="2025-07-10T00:02:33.312376055Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 10 00:02:33.314531 containerd[1990]: time="2025-07-10T00:02:33.314451443Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:33.319215 containerd[1990]: time="2025-07-10T00:02:33.319139591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:33.322511 containerd[1990]: time="2025-07-10T00:02:33.321805907Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.428117547s" Jul 10 00:02:33.322511 containerd[1990]: time="2025-07-10T00:02:33.322330163Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 10 00:02:33.324930 containerd[1990]: time="2025-07-10T00:02:33.324702563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 00:02:34.302157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:02:34.307561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:34.694681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:02:34.709921 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:02:34.814279 kubelet[2646]: E0710 00:02:34.814200 2646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:02:34.823438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:02:34.823779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:02:34.825555 systemd[1]: kubelet.service: Consumed 340ms CPU time, 105.8M memory peak. Jul 10 00:02:34.846926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120206506.mount: Deactivated successfully. Jul 10 00:02:35.368758 containerd[1990]: time="2025-07-10T00:02:35.368702353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:35.370448 containerd[1990]: time="2025-07-10T00:02:35.370321981Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 10 00:02:35.370837 containerd[1990]: time="2025-07-10T00:02:35.370771141Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:35.373744 containerd[1990]: time="2025-07-10T00:02:35.373662589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:35.375454 containerd[1990]: time="2025-07-10T00:02:35.374960569Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 2.050185478s" Jul 10 00:02:35.375454 containerd[1990]: time="2025-07-10T00:02:35.375015073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 10 00:02:35.375629 containerd[1990]: time="2025-07-10T00:02:35.375594349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:02:36.058107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011260529.mount: Deactivated successfully. Jul 10 00:02:37.241730 containerd[1990]: time="2025-07-10T00:02:37.241643078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:37.244501 containerd[1990]: time="2025-07-10T00:02:37.244354670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 10 00:02:37.251420 containerd[1990]: time="2025-07-10T00:02:37.250697162Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:37.257151 containerd[1990]: time="2025-07-10T00:02:37.257078894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:37.259267 containerd[1990]: time="2025-07-10T00:02:37.259217510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.883576025s" Jul 10 00:02:37.259472 containerd[1990]: time="2025-07-10T00:02:37.259440674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:02:37.260473 containerd[1990]: time="2025-07-10T00:02:37.260432486Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:02:37.853085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866119482.mount: Deactivated successfully. Jul 10 00:02:37.865521 containerd[1990]: time="2025-07-10T00:02:37.865444901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:02:37.867253 containerd[1990]: time="2025-07-10T00:02:37.867183821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 10 00:02:37.869836 containerd[1990]: time="2025-07-10T00:02:37.869750609Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:02:37.874304 containerd[1990]: time="2025-07-10T00:02:37.874227893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:02:37.875694 containerd[1990]: time="2025-07-10T00:02:37.875489945Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 614.865639ms" Jul 10 00:02:37.875694 containerd[1990]: time="2025-07-10T00:02:37.875544281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:02:37.876182 containerd[1990]: time="2025-07-10T00:02:37.876144665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 00:02:38.564940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160637006.mount: Deactivated successfully. Jul 10 00:02:41.743136 containerd[1990]: time="2025-07-10T00:02:41.743049261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:41.748285 containerd[1990]: time="2025-07-10T00:02:41.748218981Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 10 00:02:41.757411 containerd[1990]: time="2025-07-10T00:02:41.757057737Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:41.765737 containerd[1990]: time="2025-07-10T00:02:41.765687513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:02:41.769603 containerd[1990]: time="2025-07-10T00:02:41.768882009Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.892598888s" Jul 10 00:02:41.769603 containerd[1990]: time="2025-07-10T00:02:41.768951561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 10 00:02:45.074456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:02:45.079708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:45.418637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:45.430302 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:02:45.517910 kubelet[2795]: E0710 00:02:45.517845 2795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:02:45.522711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:02:45.523165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:02:45.523922 systemd[1]: kubelet.service: Consumed 300ms CPU time, 107M memory peak. Jul 10 00:02:48.979713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:48.980116 systemd[1]: kubelet.service: Consumed 300ms CPU time, 107M memory peak. Jul 10 00:02:48.984648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:49.036144 systemd[1]: Reload requested from client PID 2809 ('systemctl') (unit session-7.scope)... 
Jul 10 00:02:49.036185 systemd[1]: Reloading... Jul 10 00:02:49.275427 zram_generator::config[2860]: No configuration found. Jul 10 00:02:49.472090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:02:49.732945 systemd[1]: Reloading finished in 696 ms. Jul 10 00:02:49.825809 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:49.832027 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:02:49.832561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:49.832639 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95M memory peak. Jul 10 00:02:49.836284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:50.163290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:50.182936 (kubelet)[2919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:02:50.255369 kubelet[2919]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:02:50.255369 kubelet[2919]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:02:50.255369 kubelet[2919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:02:50.255885 kubelet[2919]: I0710 00:02:50.255483 2919 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:02:50.963501 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 10 00:02:51.137004 kubelet[2919]: I0710 00:02:51.136950 2919 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:02:51.137223 kubelet[2919]: I0710 00:02:51.137202 2919 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:02:51.138473 kubelet[2919]: I0710 00:02:51.138424 2919 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:02:51.204557 kubelet[2919]: E0710 00:02:51.204504 2919 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:51.211495 kubelet[2919]: I0710 00:02:51.211348 2919 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:02:51.223266 kubelet[2919]: I0710 00:02:51.222612 2919 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:02:51.230567 kubelet[2919]: I0710 00:02:51.230507 2919 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:02:51.233104 kubelet[2919]: I0710 00:02:51.232157 2919 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:02:51.233104 kubelet[2919]: I0710 00:02:51.232207 2919 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:02:51.233104 kubelet[2919]: I0710 00:02:51.232674 2919 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 10 00:02:51.233104 kubelet[2919]: I0710 00:02:51.232695 2919 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:02:51.233485 kubelet[2919]: I0710 00:02:51.233030 2919 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:02:51.239541 kubelet[2919]: I0710 00:02:51.239381 2919 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:02:51.239710 kubelet[2919]: I0710 00:02:51.239691 2919 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:02:51.239846 kubelet[2919]: I0710 00:02:51.239828 2919 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:02:51.240468 kubelet[2919]: I0710 00:02:51.240447 2919 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:02:51.247124 kubelet[2919]: W0710 00:02:51.246343 2919 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-230&limit=500&resourceVersion=0": dial tcp 172.31.25.230:6443: connect: connection refused Jul 10 00:02:51.247351 kubelet[2919]: E0710 00:02:51.247315 2919 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-230&limit=500&resourceVersion=0\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:51.250426 kubelet[2919]: I0710 00:02:51.249536 2919 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:02:51.250825 kubelet[2919]: I0710 00:02:51.250799 2919 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:02:51.251117 kubelet[2919]: W0710 00:02:51.251097 2919 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:02:51.252769 kubelet[2919]: I0710 00:02:51.252735 2919 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:02:51.252949 kubelet[2919]: I0710 00:02:51.252931 2919 server.go:1287] "Started kubelet" Jul 10 00:02:51.259609 kubelet[2919]: I0710 00:02:51.259569 2919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:02:51.269976 kubelet[2919]: W0710 00:02:51.269870 2919 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.230:6443: connect: connection refused Jul 10 00:02:51.270090 kubelet[2919]: E0710 00:02:51.269991 2919 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:51.273305 kubelet[2919]: I0710 00:02:51.273250 2919 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:02:51.275546 kubelet[2919]: I0710 00:02:51.274485 2919 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:02:51.275546 kubelet[2919]: I0710 00:02:51.274898 2919 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:02:51.275546 kubelet[2919]: I0710 00:02:51.275020 2919 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:02:51.275824 kubelet[2919]: E0710 00:02:51.275797 2919 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-230\" not found" Jul 10 00:02:51.285351 kubelet[2919]: I0710 00:02:51.285306 
2919 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:02:51.285587 kubelet[2919]: I0710 00:02:51.285536 2919 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:02:51.285587 kubelet[2919]: I0710 00:02:51.285441 2919 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:02:51.286169 kubelet[2919]: I0710 00:02:51.285332 2919 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:02:51.289026 kubelet[2919]: E0710 00:02:51.288961 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-230?timeout=10s\": dial tcp 172.31.25.230:6443: connect: connection refused" interval="200ms" Jul 10 00:02:51.289887 kubelet[2919]: E0710 00:02:51.289415 2919 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.230:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-230.1850baea7870d084 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-230,UID:ip-172-31-25-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-230,},FirstTimestamp:2025-07-10 00:02:51.252895876 +0000 UTC m=+1.064266099,LastTimestamp:2025-07-10 00:02:51.252895876 +0000 UTC m=+1.064266099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-230,}" Jul 10 00:02:51.291980 kubelet[2919]: W0710 00:02:51.291910 2919 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.25.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.230:6443: connect: connection refused Jul 10 00:02:51.293210 kubelet[2919]: E0710 00:02:51.293141 2919 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:51.293927 kubelet[2919]: E0710 00:02:51.293869 2919 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:02:51.295999 kubelet[2919]: I0710 00:02:51.295941 2919 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:02:51.295999 kubelet[2919]: I0710 00:02:51.295989 2919 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:02:51.296198 kubelet[2919]: I0710 00:02:51.296155 2919 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:02:51.317279 kubelet[2919]: I0710 00:02:51.317220 2919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:02:51.320770 kubelet[2919]: I0710 00:02:51.320709 2919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:02:51.320770 kubelet[2919]: I0710 00:02:51.320759 2919 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:02:51.321159 kubelet[2919]: I0710 00:02:51.320792 2919 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:02:51.321159 kubelet[2919]: I0710 00:02:51.320819 2919 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:02:51.321159 kubelet[2919]: E0710 00:02:51.320884 2919 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:02:51.328816 kubelet[2919]: W0710 00:02:51.328488 2919 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.230:6443: connect: connection refused Jul 10 00:02:51.330324 kubelet[2919]: E0710 00:02:51.329100 2919 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:51.330324 kubelet[2919]: I0710 00:02:51.329896 2919 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:02:51.330324 kubelet[2919]: I0710 00:02:51.329919 2919 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:02:51.330324 kubelet[2919]: I0710 00:02:51.329953 2919 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:02:51.336289 kubelet[2919]: I0710 00:02:51.336240 2919 policy_none.go:49] "None policy: Start" Jul 10 00:02:51.336289 kubelet[2919]: I0710 00:02:51.336283 2919 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:02:51.336572 kubelet[2919]: I0710 00:02:51.336309 2919 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:02:51.350020 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 10 00:02:51.366665 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:02:51.373729 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:02:51.376317 kubelet[2919]: E0710 00:02:51.376273 2919 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-230\" not found" Jul 10 00:02:51.384172 kubelet[2919]: I0710 00:02:51.383518 2919 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:02:51.384172 kubelet[2919]: I0710 00:02:51.383804 2919 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:02:51.384172 kubelet[2919]: I0710 00:02:51.383823 2919 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:02:51.384408 kubelet[2919]: I0710 00:02:51.384348 2919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:02:51.386082 kubelet[2919]: E0710 00:02:51.385952 2919 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:02:51.386082 kubelet[2919]: E0710 00:02:51.386063 2919 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-230\" not found" Jul 10 00:02:51.442812 systemd[1]: Created slice kubepods-burstable-pod71a124b1834aaea58bdd7693283c98e1.slice - libcontainer container kubepods-burstable-pod71a124b1834aaea58bdd7693283c98e1.slice. 
Jul 10 00:02:51.455428 kubelet[2919]: E0710 00:02:51.455295 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:51.459240 systemd[1]: Created slice kubepods-burstable-pod36db4881402c1e747565d7206eacac72.slice - libcontainer container kubepods-burstable-pod36db4881402c1e747565d7206eacac72.slice. Jul 10 00:02:51.466768 kubelet[2919]: E0710 00:02:51.466697 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:51.470006 systemd[1]: Created slice kubepods-burstable-podd79ed0620cc0ad7b2474f5673afd5b00.slice - libcontainer container kubepods-burstable-podd79ed0620cc0ad7b2474f5673afd5b00.slice. Jul 10 00:02:51.473879 kubelet[2919]: E0710 00:02:51.473747 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:51.487807 kubelet[2919]: I0710 00:02:51.487745 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:02:51.487920 kubelet[2919]: I0710 00:02:51.487813 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:51.487920 kubelet[2919]: I0710 00:02:51.487853 2919 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:51.487920 kubelet[2919]: I0710 00:02:51.487891 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-ca-certs\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:02:51.488045 kubelet[2919]: I0710 00:02:51.487930 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:51.488045 kubelet[2919]: I0710 00:02:51.487964 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:51.488045 kubelet[2919]: I0710 00:02:51.488015 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " 
pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:51.488207 kubelet[2919]: I0710 00:02:51.488051 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ed0620cc0ad7b2474f5673afd5b00-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-230\" (UID: \"d79ed0620cc0ad7b2474f5673afd5b00\") " pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:02:51.488207 kubelet[2919]: I0710 00:02:51.488085 2919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:02:51.488449 kubelet[2919]: I0710 00:02:51.488379 2919 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-230" Jul 10 00:02:51.489191 kubelet[2919]: E0710 00:02:51.489145 2919 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.230:6443/api/v1/nodes\": dial tcp 172.31.25.230:6443: connect: connection refused" node="ip-172-31-25-230" Jul 10 00:02:51.489669 kubelet[2919]: E0710 00:02:51.489615 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-230?timeout=10s\": dial tcp 172.31.25.230:6443: connect: connection refused" interval="400ms" Jul 10 00:02:51.691927 kubelet[2919]: I0710 00:02:51.691883 2919 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-230" Jul 10 00:02:51.692593 kubelet[2919]: E0710 00:02:51.692542 2919 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.230:6443/api/v1/nodes\": dial tcp 172.31.25.230:6443: connect: connection refused" 
node="ip-172-31-25-230" Jul 10 00:02:51.757768 containerd[1990]: time="2025-07-10T00:02:51.757616382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-230,Uid:71a124b1834aaea58bdd7693283c98e1,Namespace:kube-system,Attempt:0,}" Jul 10 00:02:51.769376 containerd[1990]: time="2025-07-10T00:02:51.769306554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-230,Uid:36db4881402c1e747565d7206eacac72,Namespace:kube-system,Attempt:0,}" Jul 10 00:02:51.777158 containerd[1990]: time="2025-07-10T00:02:51.777082386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-230,Uid:d79ed0620cc0ad7b2474f5673afd5b00,Namespace:kube-system,Attempt:0,}" Jul 10 00:02:51.820491 containerd[1990]: time="2025-07-10T00:02:51.820364203Z" level=info msg="connecting to shim 1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e" address="unix:///run/containerd/s/64ae6d6e0a732018ed2f69ddcc7e04c570cb39255b0a6e33d3b0565ecc0f4245" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:02:51.881324 containerd[1990]: time="2025-07-10T00:02:51.880900831Z" level=info msg="connecting to shim 41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4" address="unix:///run/containerd/s/5cefcc594113ecaf838fdfd5d2c79e16d0b920675e1e1f61f2629573ef1933a3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:02:51.890850 systemd[1]: Started cri-containerd-1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e.scope - libcontainer container 1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e. 
Jul 10 00:02:51.893309 kubelet[2919]: E0710 00:02:51.893232 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-230?timeout=10s\": dial tcp 172.31.25.230:6443: connect: connection refused" interval="800ms" Jul 10 00:02:51.934500 containerd[1990]: time="2025-07-10T00:02:51.934415791Z" level=info msg="connecting to shim 983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0" address="unix:///run/containerd/s/70deeddd152636232986c22ba98e94708562a297fac5fd5c31ee33a2c6435d39" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:02:51.969924 systemd[1]: Started cri-containerd-41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4.scope - libcontainer container 41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4. Jul 10 00:02:51.999836 systemd[1]: Started cri-containerd-983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0.scope - libcontainer container 983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0. 
Jul 10 00:02:52.068518 containerd[1990]: time="2025-07-10T00:02:52.068286604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-230,Uid:71a124b1834aaea58bdd7693283c98e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e\"" Jul 10 00:02:52.084338 containerd[1990]: time="2025-07-10T00:02:52.084254872Z" level=info msg="CreateContainer within sandbox \"1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:02:52.102949 kubelet[2919]: I0710 00:02:52.102816 2919 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-230" Jul 10 00:02:52.105717 kubelet[2919]: E0710 00:02:52.104699 2919 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.230:6443/api/v1/nodes\": dial tcp 172.31.25.230:6443: connect: connection refused" node="ip-172-31-25-230" Jul 10 00:02:52.114164 containerd[1990]: time="2025-07-10T00:02:52.114107524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-230,Uid:36db4881402c1e747565d7206eacac72,Namespace:kube-system,Attempt:0,} returns sandbox id \"41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4\"" Jul 10 00:02:52.115130 containerd[1990]: time="2025-07-10T00:02:52.114823420Z" level=info msg="Container a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:02:52.127332 containerd[1990]: time="2025-07-10T00:02:52.127223644Z" level=info msg="CreateContainer within sandbox \"41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:02:52.138034 containerd[1990]: time="2025-07-10T00:02:52.137957932Z" level=info msg="CreateContainer within sandbox 
\"1a5ff194d17796f25ae04d2b6dba3a3d784688daa70d353bf5af16324b37d95e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d\"" Jul 10 00:02:52.139037 containerd[1990]: time="2025-07-10T00:02:52.138965896Z" level=info msg="StartContainer for \"a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d\"" Jul 10 00:02:52.144552 containerd[1990]: time="2025-07-10T00:02:52.144349936Z" level=info msg="connecting to shim a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d" address="unix:///run/containerd/s/64ae6d6e0a732018ed2f69ddcc7e04c570cb39255b0a6e33d3b0565ecc0f4245" protocol=ttrpc version=3 Jul 10 00:02:52.150002 containerd[1990]: time="2025-07-10T00:02:52.149935264Z" level=info msg="Container 6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:02:52.151702 containerd[1990]: time="2025-07-10T00:02:52.151595980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-230,Uid:d79ed0620cc0ad7b2474f5673afd5b00,Namespace:kube-system,Attempt:0,} returns sandbox id \"983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0\"" Jul 10 00:02:52.160146 containerd[1990]: time="2025-07-10T00:02:52.159504016Z" level=info msg="CreateContainer within sandbox \"983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:02:52.169462 containerd[1990]: time="2025-07-10T00:02:52.169285432Z" level=info msg="CreateContainer within sandbox \"41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\"" Jul 10 00:02:52.170820 containerd[1990]: time="2025-07-10T00:02:52.170774428Z" level=info msg="StartContainer for 
\"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\"" Jul 10 00:02:52.173617 containerd[1990]: time="2025-07-10T00:02:52.173562976Z" level=info msg="connecting to shim 6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156" address="unix:///run/containerd/s/5cefcc594113ecaf838fdfd5d2c79e16d0b920675e1e1f61f2629573ef1933a3" protocol=ttrpc version=3 Jul 10 00:02:52.188848 systemd[1]: Started cri-containerd-a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d.scope - libcontainer container a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d. Jul 10 00:02:52.200371 containerd[1990]: time="2025-07-10T00:02:52.200212744Z" level=info msg="Container 3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:02:52.222458 containerd[1990]: time="2025-07-10T00:02:52.222359969Z" level=info msg="CreateContainer within sandbox \"983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\"" Jul 10 00:02:52.227271 containerd[1990]: time="2025-07-10T00:02:52.225881237Z" level=info msg="StartContainer for \"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\"" Jul 10 00:02:52.228156 containerd[1990]: time="2025-07-10T00:02:52.228108233Z" level=info msg="connecting to shim 3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347" address="unix:///run/containerd/s/70deeddd152636232986c22ba98e94708562a297fac5fd5c31ee33a2c6435d39" protocol=ttrpc version=3 Jul 10 00:02:52.230959 systemd[1]: Started cri-containerd-6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156.scope - libcontainer container 6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156. 
Jul 10 00:02:52.278544 systemd[1]: Started cri-containerd-3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347.scope - libcontainer container 3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347. Jul 10 00:02:52.361851 containerd[1990]: time="2025-07-10T00:02:52.360334649Z" level=info msg="StartContainer for \"a0c098c1a64f5bb9fde4ef7a2e45ade194d0a4c2440c4012ba93f2d814c19b2d\" returns successfully" Jul 10 00:02:52.454161 containerd[1990]: time="2025-07-10T00:02:52.454101858Z" level=info msg="StartContainer for \"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\" returns successfully" Jul 10 00:02:52.479165 containerd[1990]: time="2025-07-10T00:02:52.477962934Z" level=info msg="StartContainer for \"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\" returns successfully" Jul 10 00:02:52.498598 kubelet[2919]: W0710 00:02:52.498512 2919 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.230:6443: connect: connection refused Jul 10 00:02:52.499113 kubelet[2919]: E0710 00:02:52.498611 2919 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.230:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:02:52.908677 kubelet[2919]: I0710 00:02:52.908596 2919 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-230" Jul 10 00:02:53.378853 kubelet[2919]: E0710 00:02:53.378539 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:53.390878 kubelet[2919]: E0710 00:02:53.389372 2919 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:53.391827 kubelet[2919]: E0710 00:02:53.391795 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:54.395883 kubelet[2919]: E0710 00:02:54.394914 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:54.397478 kubelet[2919]: E0710 00:02:54.396563 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:54.398031 kubelet[2919]: E0710 00:02:54.398002 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:55.396218 kubelet[2919]: E0710 00:02:55.395520 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:55.396218 kubelet[2919]: E0710 00:02:55.396011 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:55.397852 kubelet[2919]: E0710 00:02:55.397821 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:56.457451 kubelet[2919]: E0710 00:02:56.456525 2919 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 
10 00:02:56.763756 kubelet[2919]: E0710 00:02:56.763692 2919 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-230\" not found" node="ip-172-31-25-230" Jul 10 00:02:56.905762 kubelet[2919]: I0710 00:02:56.905450 2919 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-230" Jul 10 00:02:56.979462 kubelet[2919]: I0710 00:02:56.979355 2919 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:02:57.032981 kubelet[2919]: E0710 00:02:57.032596 2919 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:02:57.032981 kubelet[2919]: I0710 00:02:57.032642 2919 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:57.039891 kubelet[2919]: E0710 00:02:57.039704 2919 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:02:57.039891 kubelet[2919]: I0710 00:02:57.039746 2919 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:02:57.049924 kubelet[2919]: E0710 00:02:57.049871 2919 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:02:57.262903 kubelet[2919]: I0710 00:02:57.262766 2919 apiserver.go:52] "Watching apiserver" Jul 10 00:02:57.286373 kubelet[2919]: I0710 00:02:57.285708 2919 desired_state_of_world_populator.go:158] "Finished populating initial desired state 
of world" Jul 10 00:02:59.159781 systemd[1]: Reload requested from client PID 3194 ('systemctl') (unit session-7.scope)... Jul 10 00:02:59.159813 systemd[1]: Reloading... Jul 10 00:02:59.370430 zram_generator::config[3241]: No configuration found. Jul 10 00:02:59.587060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:02:59.878518 systemd[1]: Reloading finished in 718 ms. Jul 10 00:02:59.929695 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:02:59.953308 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:02:59.954519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:02:59.954603 systemd[1]: kubelet.service: Consumed 1.828s CPU time, 126.5M memory peak. Jul 10 00:02:59.958862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:03:00.319231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:03:00.341073 (kubelet)[3298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:03:00.441334 kubelet[3298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:03:00.441334 kubelet[3298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:03:00.441850 kubelet[3298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:03:00.441850 kubelet[3298]: I0710 00:03:00.441510 3298 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:03:00.467739 kubelet[3298]: I0710 00:03:00.467674 3298 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:03:00.467739 kubelet[3298]: I0710 00:03:00.467725 3298 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:03:00.468811 kubelet[3298]: I0710 00:03:00.468584 3298 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:03:00.471661 kubelet[3298]: I0710 00:03:00.471579 3298 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:03:00.476225 kubelet[3298]: I0710 00:03:00.476170 3298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:03:00.485071 kubelet[3298]: I0710 00:03:00.485024 3298 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:03:00.491697 kubelet[3298]: I0710 00:03:00.491649 3298 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:03:00.492202 kubelet[3298]: I0710 00:03:00.492143 3298 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:03:00.492539 kubelet[3298]: I0710 00:03:00.492197 3298 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:03:00.492691 kubelet[3298]: I0710 00:03:00.492553 3298 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 10 00:03:00.492691 kubelet[3298]: I0710 00:03:00.492581 3298 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:03:00.492691 kubelet[3298]: I0710 00:03:00.492659 3298 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:03:00.493283 kubelet[3298]: I0710 00:03:00.492891 3298 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:03:00.495119 kubelet[3298]: I0710 00:03:00.494442 3298 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:03:00.495119 kubelet[3298]: I0710 00:03:00.494538 3298 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:03:00.495119 kubelet[3298]: I0710 00:03:00.494563 3298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:03:00.498018 kubelet[3298]: I0710 00:03:00.497983 3298 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:03:00.502557 kubelet[3298]: I0710 00:03:00.500282 3298 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:03:00.504453 kubelet[3298]: I0710 00:03:00.504420 3298 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:03:00.504951 kubelet[3298]: I0710 00:03:00.504615 3298 server.go:1287] "Started kubelet" Jul 10 00:03:00.518298 kubelet[3298]: I0710 00:03:00.518245 3298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:03:00.524852 kubelet[3298]: I0710 00:03:00.524748 3298 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:03:00.538411 kubelet[3298]: I0710 00:03:00.537450 3298 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:03:00.546177 kubelet[3298]: I0710 00:03:00.546023 3298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:03:00.548021 kubelet[3298]: I0710 00:03:00.547483 3298 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:03:00.555370 kubelet[3298]: I0710 00:03:00.555319 3298 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:03:00.556547 kubelet[3298]: E0710 00:03:00.555568 3298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-230\" not found" Jul 10 00:03:00.559349 kubelet[3298]: I0710 00:03:00.558571 3298 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:03:00.559349 kubelet[3298]: I0710 00:03:00.558826 3298 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:03:00.559349 kubelet[3298]: I0710 00:03:00.559312 3298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:03:00.562602 kubelet[3298]: I0710 00:03:00.562546 3298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:03:00.562602 kubelet[3298]: I0710 00:03:00.562604 3298 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:03:00.562799 kubelet[3298]: I0710 00:03:00.562639 3298 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:03:00.562799 kubelet[3298]: I0710 00:03:00.562653 3298 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:03:00.562799 kubelet[3298]: E0710 00:03:00.562720 3298 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:03:00.601614 kubelet[3298]: I0710 00:03:00.600842 3298 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:03:00.601614 kubelet[3298]: I0710 00:03:00.601597 3298 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:03:00.603621 kubelet[3298]: I0710 00:03:00.603563 3298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:03:00.608790 kubelet[3298]: E0710 00:03:00.608581 3298 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:03:00.611265 kubelet[3298]: I0710 00:03:00.611174 3298 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:03:00.664050 kubelet[3298]: E0710 00:03:00.662785 3298 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:03:00.735245 kubelet[3298]: I0710 00:03:00.735018 3298 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:03:00.735245 kubelet[3298]: I0710 00:03:00.735049 3298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:03:00.735245 kubelet[3298]: I0710 00:03:00.735081 3298 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.735786 3298 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.735814 3298 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.735847 3298 policy_none.go:49] "None policy: Start" Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.735867 3298 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.735887 3298 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:03:00.737108 kubelet[3298]: I0710 00:03:00.736061 3298 state_mem.go:75] "Updated machine memory state" Jul 10 00:03:00.746038 kubelet[3298]: I0710 00:03:00.745990 3298 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:03:00.748515 kubelet[3298]: I0710 00:03:00.748486 3298 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:03:00.750352 kubelet[3298]: I0710 00:03:00.749656 3298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 
00:03:00.752881 kubelet[3298]: I0710 00:03:00.752848 3298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:03:00.756703 kubelet[3298]: E0710 00:03:00.756665 3298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:03:00.865071 kubelet[3298]: I0710 00:03:00.863838 3298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:00.866121 kubelet[3298]: I0710 00:03:00.866081 3298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:03:00.867639 kubelet[3298]: I0710 00:03:00.867495 3298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.875865 kubelet[3298]: I0710 00:03:00.875087 3298 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-230" Jul 10 00:03:00.898554 kubelet[3298]: I0710 00:03:00.898513 3298 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-230" Jul 10 00:03:00.898851 kubelet[3298]: I0710 00:03:00.898831 3298 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-230" Jul 10 00:03:00.961280 kubelet[3298]: I0710 00:03:00.961202 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.961851 kubelet[3298]: I0710 00:03:00.961712 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-ca-certs\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:00.962099 kubelet[3298]: I0710 00:03:00.962003 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.963026 kubelet[3298]: I0710 00:03:00.962953 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.963661 kubelet[3298]: I0710 00:03:00.963374 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.964080 kubelet[3298]: I0710 00:03:00.963730 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36db4881402c1e747565d7206eacac72-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-230\" (UID: \"36db4881402c1e747565d7206eacac72\") " pod="kube-system/kube-controller-manager-ip-172-31-25-230" Jul 10 00:03:00.964509 kubelet[3298]: I0710 00:03:00.964336 3298 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ed0620cc0ad7b2474f5673afd5b00-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-230\" (UID: \"d79ed0620cc0ad7b2474f5673afd5b00\") " pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:03:00.964976 kubelet[3298]: I0710 00:03:00.964773 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:00.965352 kubelet[3298]: I0710 00:03:00.964821 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71a124b1834aaea58bdd7693283c98e1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-230\" (UID: \"71a124b1834aaea58bdd7693283c98e1\") " pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:01.496617 kubelet[3298]: I0710 00:03:01.496215 3298 apiserver.go:52] "Watching apiserver" Jul 10 00:03:01.558924 kubelet[3298]: I0710 00:03:01.558840 3298 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:03:01.679105 kubelet[3298]: I0710 00:03:01.679017 3298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:01.682477 kubelet[3298]: I0710 00:03:01.681655 3298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:03:01.698089 kubelet[3298]: E0710 00:03:01.698027 3298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-230\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-230" Jul 10 00:03:01.703072 kubelet[3298]: 
E0710 00:03:01.702972 3298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-230\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-230" Jul 10 00:03:01.743859 kubelet[3298]: I0710 00:03:01.743701 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-230" podStartSLOduration=1.743676448 podStartE2EDuration="1.743676448s" podCreationTimestamp="2025-07-10 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:03:01.72700072 +0000 UTC m=+1.376048468" watchObservedRunningTime="2025-07-10 00:03:01.743676448 +0000 UTC m=+1.392724172" Jul 10 00:03:01.764420 kubelet[3298]: I0710 00:03:01.764295 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-230" podStartSLOduration=1.763378756 podStartE2EDuration="1.763378756s" podCreationTimestamp="2025-07-10 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:03:01.746651716 +0000 UTC m=+1.395699428" watchObservedRunningTime="2025-07-10 00:03:01.763378756 +0000 UTC m=+1.412426480" Jul 10 00:03:01.783585 kubelet[3298]: I0710 00:03:01.783485 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-230" podStartSLOduration=1.783465508 podStartE2EDuration="1.783465508s" podCreationTimestamp="2025-07-10 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:03:01.764791816 +0000 UTC m=+1.413839516" watchObservedRunningTime="2025-07-10 00:03:01.783465508 +0000 UTC m=+1.432513208" Jul 10 00:03:04.610568 update_engine[1982]: I20250710 00:03:04.610419 1982 update_attempter.cc:509] 
Updating boot flags... Jul 10 00:03:05.607560 kubelet[3298]: I0710 00:03:05.607514 3298 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:03:05.608736 containerd[1990]: time="2025-07-10T00:03:05.608692051Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:03:05.610237 kubelet[3298]: I0710 00:03:05.609286 3298 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:03:06.498757 systemd[1]: Created slice kubepods-besteffort-pod284283f3_f07c_458f_96d0_2d6ae0887c9a.slice - libcontainer container kubepods-besteffort-pod284283f3_f07c_458f_96d0_2d6ae0887c9a.slice. Jul 10 00:03:06.502788 kubelet[3298]: I0710 00:03:06.502573 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/284283f3-f07c-458f-96d0-2d6ae0887c9a-lib-modules\") pod \"kube-proxy-tn7rq\" (UID: \"284283f3-f07c-458f-96d0-2d6ae0887c9a\") " pod="kube-system/kube-proxy-tn7rq" Jul 10 00:03:06.502788 kubelet[3298]: I0710 00:03:06.502631 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/284283f3-f07c-458f-96d0-2d6ae0887c9a-kube-proxy\") pod \"kube-proxy-tn7rq\" (UID: \"284283f3-f07c-458f-96d0-2d6ae0887c9a\") " pod="kube-system/kube-proxy-tn7rq" Jul 10 00:03:06.502788 kubelet[3298]: I0710 00:03:06.502672 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/284283f3-f07c-458f-96d0-2d6ae0887c9a-xtables-lock\") pod \"kube-proxy-tn7rq\" (UID: \"284283f3-f07c-458f-96d0-2d6ae0887c9a\") " pod="kube-system/kube-proxy-tn7rq" Jul 10 00:03:06.502788 kubelet[3298]: I0710 00:03:06.502709 3298 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgsx\" (UniqueName: \"kubernetes.io/projected/284283f3-f07c-458f-96d0-2d6ae0887c9a-kube-api-access-dlgsx\") pod \"kube-proxy-tn7rq\" (UID: \"284283f3-f07c-458f-96d0-2d6ae0887c9a\") " pod="kube-system/kube-proxy-tn7rq" Jul 10 00:03:06.682149 systemd[1]: Created slice kubepods-besteffort-pod0774a03e_32af_4f52_8806_dbc380e98322.slice - libcontainer container kubepods-besteffort-pod0774a03e_32af_4f52_8806_dbc380e98322.slice. Jul 10 00:03:06.705999 kubelet[3298]: I0710 00:03:06.705859 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pht2\" (UniqueName: \"kubernetes.io/projected/0774a03e-32af-4f52-8806-dbc380e98322-kube-api-access-5pht2\") pod \"tigera-operator-747864d56d-2q5pk\" (UID: \"0774a03e-32af-4f52-8806-dbc380e98322\") " pod="tigera-operator/tigera-operator-747864d56d-2q5pk" Jul 10 00:03:06.705999 kubelet[3298]: I0710 00:03:06.705929 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0774a03e-32af-4f52-8806-dbc380e98322-var-lib-calico\") pod \"tigera-operator-747864d56d-2q5pk\" (UID: \"0774a03e-32af-4f52-8806-dbc380e98322\") " pod="tigera-operator/tigera-operator-747864d56d-2q5pk" Jul 10 00:03:06.817401 containerd[1990]: time="2025-07-10T00:03:06.817318737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn7rq,Uid:284283f3-f07c-458f-96d0-2d6ae0887c9a,Namespace:kube-system,Attempt:0,}" Jul 10 00:03:06.866424 containerd[1990]: time="2025-07-10T00:03:06.866206917Z" level=info msg="connecting to shim f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e" address="unix:///run/containerd/s/b8e85e79c2bf78eb25d0764d9555193096ee7ad41ee92e4f3c4151935292b780" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:06.918701 systemd[1]: Started 
cri-containerd-f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e.scope - libcontainer container f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e. Jul 10 00:03:06.975589 containerd[1990]: time="2025-07-10T00:03:06.975453670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn7rq,Uid:284283f3-f07c-458f-96d0-2d6ae0887c9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e\"" Jul 10 00:03:06.983276 containerd[1990]: time="2025-07-10T00:03:06.983211934Z" level=info msg="CreateContainer within sandbox \"f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:03:06.992646 containerd[1990]: time="2025-07-10T00:03:06.992533138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2q5pk,Uid:0774a03e-32af-4f52-8806-dbc380e98322,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:03:07.010975 containerd[1990]: time="2025-07-10T00:03:07.010822494Z" level=info msg="Container 01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:07.029761 containerd[1990]: time="2025-07-10T00:03:07.029645154Z" level=info msg="CreateContainer within sandbox \"f81e46051b191e86d863e77c64fa157af4bd84c8a559bb108d9f6bf04e0e684e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654\"" Jul 10 00:03:07.031099 containerd[1990]: time="2025-07-10T00:03:07.031031934Z" level=info msg="StartContainer for \"01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654\"" Jul 10 00:03:07.035506 containerd[1990]: time="2025-07-10T00:03:07.035434818Z" level=info msg="connecting to shim 01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654" 
address="unix:///run/containerd/s/b8e85e79c2bf78eb25d0764d9555193096ee7ad41ee92e4f3c4151935292b780" protocol=ttrpc version=3 Jul 10 00:03:07.062705 containerd[1990]: time="2025-07-10T00:03:07.062353026Z" level=info msg="connecting to shim c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296" address="unix:///run/containerd/s/d03f7466a718fc407ae638b71015e66dfbc89e3538dbf88623b5964230385291" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:07.086711 systemd[1]: Started cri-containerd-01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654.scope - libcontainer container 01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654. Jul 10 00:03:07.126768 systemd[1]: Started cri-containerd-c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296.scope - libcontainer container c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296. Jul 10 00:03:07.221316 containerd[1990]: time="2025-07-10T00:03:07.220358455Z" level=info msg="StartContainer for \"01505925ad477076632a6bfb1e93fe1384b45bcaa449fa7abb5ee02b8008e654\" returns successfully" Jul 10 00:03:07.271545 containerd[1990]: time="2025-07-10T00:03:07.271354627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2q5pk,Uid:0774a03e-32af-4f52-8806-dbc380e98322,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296\"" Jul 10 00:03:07.276785 containerd[1990]: time="2025-07-10T00:03:07.276725563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:03:08.627601 kubelet[3298]: I0710 00:03:08.627466 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tn7rq" podStartSLOduration=2.627439942 podStartE2EDuration="2.627439942s" podCreationTimestamp="2025-07-10 00:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 
00:03:07.728898706 +0000 UTC m=+7.377946442" watchObservedRunningTime="2025-07-10 00:03:08.627439942 +0000 UTC m=+8.276487690" Jul 10 00:03:08.922069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957823032.mount: Deactivated successfully. Jul 10 00:03:09.626188 containerd[1990]: time="2025-07-10T00:03:09.624435551Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:09.627790 containerd[1990]: time="2025-07-10T00:03:09.626279891Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 10 00:03:09.627790 containerd[1990]: time="2025-07-10T00:03:09.627766295Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:09.632376 containerd[1990]: time="2025-07-10T00:03:09.632298083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:09.634989 containerd[1990]: time="2025-07-10T00:03:09.634805771Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.357787264s" Jul 10 00:03:09.634989 containerd[1990]: time="2025-07-10T00:03:09.634857755Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 00:03:09.639277 containerd[1990]: time="2025-07-10T00:03:09.639209807Z" level=info msg="CreateContainer within sandbox 
\"c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:03:09.654362 containerd[1990]: time="2025-07-10T00:03:09.653440847Z" level=info msg="Container 76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:09.663338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037701810.mount: Deactivated successfully. Jul 10 00:03:09.668413 containerd[1990]: time="2025-07-10T00:03:09.668269907Z" level=info msg="CreateContainer within sandbox \"c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\"" Jul 10 00:03:09.670676 containerd[1990]: time="2025-07-10T00:03:09.670587707Z" level=info msg="StartContainer for \"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\"" Jul 10 00:03:09.673143 containerd[1990]: time="2025-07-10T00:03:09.673052291Z" level=info msg="connecting to shim 76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560" address="unix:///run/containerd/s/d03f7466a718fc407ae638b71015e66dfbc89e3538dbf88623b5964230385291" protocol=ttrpc version=3 Jul 10 00:03:09.714900 systemd[1]: Started cri-containerd-76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560.scope - libcontainer container 76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560. 
Jul 10 00:03:09.776082 containerd[1990]: time="2025-07-10T00:03:09.776010468Z" level=info msg="StartContainer for \"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" returns successfully" Jul 10 00:03:11.096361 kubelet[3298]: I0710 00:03:11.096121 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2q5pk" podStartSLOduration=2.7339226500000002 podStartE2EDuration="5.096097342s" podCreationTimestamp="2025-07-10 00:03:06 +0000 UTC" firstStartedPulling="2025-07-10 00:03:07.274186627 +0000 UTC m=+6.923234339" lastFinishedPulling="2025-07-10 00:03:09.636361331 +0000 UTC m=+9.285409031" observedRunningTime="2025-07-10 00:03:10.752451373 +0000 UTC m=+10.401499097" watchObservedRunningTime="2025-07-10 00:03:11.096097342 +0000 UTC m=+10.745145054" Jul 10 00:03:16.790917 sudo[2346]: pam_unix(sudo:session): session closed for user root Jul 10 00:03:16.814530 sshd[2345]: Connection closed by 139.178.89.65 port 39782 Jul 10 00:03:16.815343 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Jul 10 00:03:16.823934 systemd[1]: sshd@6-172.31.25.230:22-139.178.89.65:39782.service: Deactivated successfully. Jul 10 00:03:16.834585 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:03:16.837528 systemd[1]: session-7.scope: Consumed 10.770s CPU time, 234.5M memory peak. Jul 10 00:03:16.846409 systemd-logind[1981]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:03:16.854257 systemd-logind[1981]: Removed session 7. Jul 10 00:03:31.388473 systemd[1]: Created slice kubepods-besteffort-pod2cc2ff39_3e12_4549_8768_cc9b7c036b8e.slice - libcontainer container kubepods-besteffort-pod2cc2ff39_3e12_4549_8768_cc9b7c036b8e.slice. 
Jul 10 00:03:31.478587 kubelet[3298]: I0710 00:03:31.478379 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc2ff39-3e12-4549-8768-cc9b7c036b8e-tigera-ca-bundle\") pod \"calico-typha-67564c7b44-w4cjs\" (UID: \"2cc2ff39-3e12-4549-8768-cc9b7c036b8e\") " pod="calico-system/calico-typha-67564c7b44-w4cjs" Jul 10 00:03:31.478587 kubelet[3298]: I0710 00:03:31.478487 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2cc2ff39-3e12-4549-8768-cc9b7c036b8e-typha-certs\") pod \"calico-typha-67564c7b44-w4cjs\" (UID: \"2cc2ff39-3e12-4549-8768-cc9b7c036b8e\") " pod="calico-system/calico-typha-67564c7b44-w4cjs" Jul 10 00:03:31.478587 kubelet[3298]: I0710 00:03:31.478532 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7gj\" (UniqueName: \"kubernetes.io/projected/2cc2ff39-3e12-4549-8768-cc9b7c036b8e-kube-api-access-cj7gj\") pod \"calico-typha-67564c7b44-w4cjs\" (UID: \"2cc2ff39-3e12-4549-8768-cc9b7c036b8e\") " pod="calico-system/calico-typha-67564c7b44-w4cjs" Jul 10 00:03:31.886887 systemd[1]: Created slice kubepods-besteffort-pod77f66d9e_adc9_4a54_a2b3_2f3c3f13555b.slice - libcontainer container kubepods-besteffort-pod77f66d9e_adc9_4a54_a2b3_2f3c3f13555b.slice. 
Jul 10 00:03:31.980979 kubelet[3298]: I0710 00:03:31.980915 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-cni-log-dir\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.980979 kubelet[3298]: I0710 00:03:31.980984 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-policysync\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981181 kubelet[3298]: I0710 00:03:31.981022 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-var-lib-calico\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981181 kubelet[3298]: I0710 00:03:31.981063 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-cni-bin-dir\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981181 kubelet[3298]: I0710 00:03:31.981101 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-lib-modules\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981181 kubelet[3298]: I0710 00:03:31.981137 3298 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-node-certs\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981407 kubelet[3298]: I0710 00:03:31.981178 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-xtables-lock\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981407 kubelet[3298]: I0710 00:03:31.981216 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcdq\" (UniqueName: \"kubernetes.io/projected/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-kube-api-access-lrcdq\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981407 kubelet[3298]: I0710 00:03:31.981257 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-cni-net-dir\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981407 kubelet[3298]: I0710 00:03:31.981304 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-flexvol-driver-host\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981407 kubelet[3298]: I0710 00:03:31.981338 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-var-run-calico\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.981680 kubelet[3298]: I0710 00:03:31.981374 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f66d9e-adc9-4a54-a2b3-2f3c3f13555b-tigera-ca-bundle\") pod \"calico-node-llk65\" (UID: \"77f66d9e-adc9-4a54-a2b3-2f3c3f13555b\") " pod="calico-system/calico-node-llk65" Jul 10 00:03:31.997620 containerd[1990]: time="2025-07-10T00:03:31.997558798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67564c7b44-w4cjs,Uid:2cc2ff39-3e12-4549-8768-cc9b7c036b8e,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:32.059790 containerd[1990]: time="2025-07-10T00:03:32.059679234Z" level=info msg="connecting to shim 90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b" address="unix:///run/containerd/s/0c4d71b8aba32523af38228e060e751695a558a5eb772314cf0c9cbc11a7beaf" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:32.104484 kubelet[3298]: E0710 00:03:32.103755 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.104484 kubelet[3298]: W0710 00:03:32.103794 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.104484 kubelet[3298]: E0710 00:03:32.103839 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.116736 kubelet[3298]: E0710 00:03:32.116601 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.116736 kubelet[3298]: W0710 00:03:32.116646 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.116736 kubelet[3298]: E0710 00:03:32.116679 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.154735 systemd[1]: Started cri-containerd-90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b.scope - libcontainer container 90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b. Jul 10 00:03:32.162226 kubelet[3298]: E0710 00:03:32.162172 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.162475 kubelet[3298]: W0710 00:03:32.162443 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.162591 kubelet[3298]: E0710 00:03:32.162567 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.175448 kubelet[3298]: E0710 00:03:32.175124 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:32.193916 containerd[1990]: time="2025-07-10T00:03:32.193844539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llk65,Uid:77f66d9e-adc9-4a54-a2b3-2f3c3f13555b,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:32.250524 containerd[1990]: time="2025-07-10T00:03:32.249654235Z" level=info msg="connecting to shim ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df" address="unix:///run/containerd/s/58959c66e56ffde6ffefb43820bd66d7037a57ab2af652efacc32bcae41080e2" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:32.258047 kubelet[3298]: E0710 00:03:32.257993 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.258273 kubelet[3298]: W0710 00:03:32.258242 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.258461 kubelet[3298]: E0710 00:03:32.258437 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.260225 kubelet[3298]: E0710 00:03:32.260151 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.260539 kubelet[3298]: W0710 00:03:32.260185 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.260758 kubelet[3298]: E0710 00:03:32.260614 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.264363 kubelet[3298]: E0710 00:03:32.264132 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.264932 kubelet[3298]: W0710 00:03:32.264555 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.264932 kubelet[3298]: E0710 00:03:32.264601 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.266378 kubelet[3298]: E0710 00:03:32.266343 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.267305 kubelet[3298]: W0710 00:03:32.266615 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.267305 kubelet[3298]: E0710 00:03:32.266653 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.268158 kubelet[3298]: E0710 00:03:32.267998 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.268158 kubelet[3298]: W0710 00:03:32.268030 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.268158 kubelet[3298]: E0710 00:03:32.268060 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.269089 kubelet[3298]: E0710 00:03:32.268932 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.269089 kubelet[3298]: W0710 00:03:32.268966 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.269089 kubelet[3298]: E0710 00:03:32.268997 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.270255 kubelet[3298]: E0710 00:03:32.270078 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.270255 kubelet[3298]: W0710 00:03:32.270110 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.270255 kubelet[3298]: E0710 00:03:32.270140 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.271718 kubelet[3298]: E0710 00:03:32.271677 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.272040 kubelet[3298]: W0710 00:03:32.271879 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.272040 kubelet[3298]: E0710 00:03:32.271920 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.272634 kubelet[3298]: E0710 00:03:32.272541 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.272850 kubelet[3298]: W0710 00:03:32.272737 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.272850 kubelet[3298]: E0710 00:03:32.272773 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.274736 kubelet[3298]: E0710 00:03:32.274434 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.274736 kubelet[3298]: W0710 00:03:32.274486 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.274736 kubelet[3298]: E0710 00:03:32.274519 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.275713 kubelet[3298]: E0710 00:03:32.275683 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.276171 kubelet[3298]: W0710 00:03:32.275876 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.276171 kubelet[3298]: E0710 00:03:32.276043 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.277721 kubelet[3298]: E0710 00:03:32.277684 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.278157 kubelet[3298]: W0710 00:03:32.277963 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.278157 kubelet[3298]: E0710 00:03:32.278003 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.280270 kubelet[3298]: E0710 00:03:32.280234 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.280813 kubelet[3298]: W0710 00:03:32.280478 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.280813 kubelet[3298]: E0710 00:03:32.280517 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.284053 kubelet[3298]: E0710 00:03:32.283729 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.284053 kubelet[3298]: W0710 00:03:32.283764 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.284053 kubelet[3298]: E0710 00:03:32.283799 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.284764 kubelet[3298]: E0710 00:03:32.284422 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.284764 kubelet[3298]: W0710 00:03:32.284452 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.284764 kubelet[3298]: E0710 00:03:32.284480 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.285473 kubelet[3298]: E0710 00:03:32.285142 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.285473 kubelet[3298]: W0710 00:03:32.285174 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.285473 kubelet[3298]: E0710 00:03:32.285202 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.285993 kubelet[3298]: E0710 00:03:32.285965 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.286264 kubelet[3298]: W0710 00:03:32.286234 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.287535 kubelet[3298]: E0710 00:03:32.287461 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.288632 kubelet[3298]: E0710 00:03:32.288480 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.288632 kubelet[3298]: W0710 00:03:32.288513 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.288632 kubelet[3298]: E0710 00:03:32.288544 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.289874 kubelet[3298]: E0710 00:03:32.289610 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.289874 kubelet[3298]: W0710 00:03:32.289644 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.289874 kubelet[3298]: E0710 00:03:32.289675 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.290742 kubelet[3298]: E0710 00:03:32.290535 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.290742 kubelet[3298]: W0710 00:03:32.290664 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.290742 kubelet[3298]: E0710 00:03:32.290698 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.293219 kubelet[3298]: E0710 00:03:32.293160 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.293556 kubelet[3298]: W0710 00:03:32.293432 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.293740 kubelet[3298]: E0710 00:03:32.293688 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.294668 kubelet[3298]: I0710 00:03:32.294574 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/63526049-3309-4f65-ad78-b95e459a7f01-varrun\") pod \"csi-node-driver-cvfdh\" (UID: \"63526049-3309-4f65-ad78-b95e459a7f01\") " pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:32.298461 kubelet[3298]: E0710 00:03:32.297089 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.298461 kubelet[3298]: W0710 00:03:32.297123 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.298461 kubelet[3298]: E0710 00:03:32.297164 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.299171 kubelet[3298]: E0710 00:03:32.298844 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.299171 kubelet[3298]: W0710 00:03:32.298875 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.299171 kubelet[3298]: E0710 00:03:32.298995 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.299496 kubelet[3298]: E0710 00:03:32.299475 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.299658 kubelet[3298]: W0710 00:03:32.299598 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.299658 kubelet[3298]: E0710 00:03:32.299629 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.300095 kubelet[3298]: I0710 00:03:32.299798 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63526049-3309-4f65-ad78-b95e459a7f01-kubelet-dir\") pod \"csi-node-driver-cvfdh\" (UID: \"63526049-3309-4f65-ad78-b95e459a7f01\") " pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:32.302142 kubelet[3298]: E0710 00:03:32.301604 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.302142 kubelet[3298]: W0710 00:03:32.301831 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.302142 kubelet[3298]: E0710 00:03:32.301890 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.304064 kubelet[3298]: E0710 00:03:32.303827 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.304064 kubelet[3298]: W0710 00:03:32.303880 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.304064 kubelet[3298]: E0710 00:03:32.304004 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.306829 kubelet[3298]: E0710 00:03:32.306795 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.307342 kubelet[3298]: W0710 00:03:32.306978 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.307342 kubelet[3298]: E0710 00:03:32.307022 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.307342 kubelet[3298]: I0710 00:03:32.307065 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63526049-3309-4f65-ad78-b95e459a7f01-registration-dir\") pod \"csi-node-driver-cvfdh\" (UID: \"63526049-3309-4f65-ad78-b95e459a7f01\") " pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:32.310003 kubelet[3298]: E0710 00:03:32.309581 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.310003 kubelet[3298]: W0710 00:03:32.309618 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.310003 kubelet[3298]: E0710 00:03:32.309651 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.310003 kubelet[3298]: I0710 00:03:32.309694 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4v5\" (UniqueName: \"kubernetes.io/projected/63526049-3309-4f65-ad78-b95e459a7f01-kube-api-access-qq4v5\") pod \"csi-node-driver-cvfdh\" (UID: \"63526049-3309-4f65-ad78-b95e459a7f01\") " pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:32.311424 kubelet[3298]: E0710 00:03:32.310428 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.311424 kubelet[3298]: W0710 00:03:32.310465 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.311424 kubelet[3298]: E0710 00:03:32.310495 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.311424 kubelet[3298]: I0710 00:03:32.310534 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63526049-3309-4f65-ad78-b95e459a7f01-socket-dir\") pod \"csi-node-driver-cvfdh\" (UID: \"63526049-3309-4f65-ad78-b95e459a7f01\") " pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:32.313437 kubelet[3298]: E0710 00:03:32.313258 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.313437 kubelet[3298]: W0710 00:03:32.313291 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.313437 kubelet[3298]: E0710 00:03:32.313329 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.315112 kubelet[3298]: E0710 00:03:32.315077 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.315843 kubelet[3298]: W0710 00:03:32.315568 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.315843 kubelet[3298]: E0710 00:03:32.315637 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.317380 kubelet[3298]: E0710 00:03:32.317331 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.318250 kubelet[3298]: W0710 00:03:32.318145 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.318685 kubelet[3298]: E0710 00:03:32.318549 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.319442 kubelet[3298]: E0710 00:03:32.319325 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.320688 kubelet[3298]: W0710 00:03:32.320577 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.321339 kubelet[3298]: E0710 00:03:32.320986 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.321839 kubelet[3298]: E0710 00:03:32.321800 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.322122 kubelet[3298]: W0710 00:03:32.322047 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.322122 kubelet[3298]: E0710 00:03:32.322089 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.323418 kubelet[3298]: E0710 00:03:32.323327 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.323418 kubelet[3298]: W0710 00:03:32.323360 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.323955 kubelet[3298]: E0710 00:03:32.323708 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.368693 systemd[1]: Started cri-containerd-ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df.scope - libcontainer container ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df. 
Jul 10 00:03:32.417427 kubelet[3298]: E0710 00:03:32.415092 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.417427 kubelet[3298]: W0710 00:03:32.415131 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.417427 kubelet[3298]: E0710 00:03:32.415164 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.425073 containerd[1990]: time="2025-07-10T00:03:32.424950920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67564c7b44-w4cjs,Uid:2cc2ff39-3e12-4549-8768-cc9b7c036b8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b\"" Jul 10 00:03:32.432046 kubelet[3298]: E0710 00:03:32.431355 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.435933 kubelet[3298]: W0710 00:03:32.435886 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.436198 kubelet[3298]: E0710 00:03:32.436146 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.437460 kubelet[3298]: E0710 00:03:32.437412 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.438806 kubelet[3298]: W0710 00:03:32.438094 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.439134 kubelet[3298]: E0710 00:03:32.439099 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.440453 kubelet[3298]: E0710 00:03:32.440362 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.440453 kubelet[3298]: W0710 00:03:32.440415 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.440453 kubelet[3298]: E0710 00:03:32.440448 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.441589 kubelet[3298]: E0710 00:03:32.441537 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.441589 kubelet[3298]: W0710 00:03:32.441577 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.441796 kubelet[3298]: E0710 00:03:32.441611 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.444785 containerd[1990]: time="2025-07-10T00:03:32.444711524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:03:32.449015 kubelet[3298]: E0710 00:03:32.448943 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.449015 kubelet[3298]: W0710 00:03:32.449002 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.449284 kubelet[3298]: E0710 00:03:32.449037 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.451687 kubelet[3298]: E0710 00:03:32.451634 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.451687 kubelet[3298]: W0710 00:03:32.451676 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.451862 kubelet[3298]: E0710 00:03:32.451713 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.453799 kubelet[3298]: E0710 00:03:32.453749 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.453902 kubelet[3298]: W0710 00:03:32.453792 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.454508 kubelet[3298]: E0710 00:03:32.453911 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.457037 kubelet[3298]: E0710 00:03:32.456966 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.457037 kubelet[3298]: W0710 00:03:32.457015 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.457205 kubelet[3298]: E0710 00:03:32.457052 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.464612 kubelet[3298]: E0710 00:03:32.464524 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.464612 kubelet[3298]: W0710 00:03:32.464564 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.464612 kubelet[3298]: E0710 00:03:32.464603 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.465798 kubelet[3298]: E0710 00:03:32.465716 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.465798 kubelet[3298]: W0710 00:03:32.465759 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.465977 kubelet[3298]: E0710 00:03:32.465807 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.470569 kubelet[3298]: E0710 00:03:32.470498 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.470569 kubelet[3298]: W0710 00:03:32.470555 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.471903 kubelet[3298]: E0710 00:03:32.471842 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.471903 kubelet[3298]: W0710 00:03:32.471886 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.473796 kubelet[3298]: E0710 00:03:32.473320 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.473796 kubelet[3298]: W0710 00:03:32.473544 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.475948 kubelet[3298]: E0710 00:03:32.475108 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.476609 kubelet[3298]: W0710 00:03:32.475936 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.476609 kubelet[3298]: E0710 00:03:32.476008 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.477706 kubelet[3298]: E0710 00:03:32.477548 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.477706 kubelet[3298]: E0710 00:03:32.477646 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.477706 kubelet[3298]: E0710 00:03:32.477703 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.477903 kubelet[3298]: E0710 00:03:32.477795 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.477903 kubelet[3298]: W0710 00:03:32.477814 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.477903 kubelet[3298]: E0710 00:03:32.477854 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.479770 kubelet[3298]: E0710 00:03:32.479704 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.479770 kubelet[3298]: W0710 00:03:32.479747 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.480999 kubelet[3298]: E0710 00:03:32.479785 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.482477 kubelet[3298]: E0710 00:03:32.482262 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.482477 kubelet[3298]: W0710 00:03:32.482462 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.482650 kubelet[3298]: E0710 00:03:32.482499 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.483525 kubelet[3298]: E0710 00:03:32.483323 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.483525 kubelet[3298]: W0710 00:03:32.483364 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.483699 kubelet[3298]: E0710 00:03:32.483579 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.485668 kubelet[3298]: E0710 00:03:32.485599 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.485668 kubelet[3298]: W0710 00:03:32.485640 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.486139 kubelet[3298]: E0710 00:03:32.485822 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.486139 kubelet[3298]: E0710 00:03:32.486124 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.486291 kubelet[3298]: W0710 00:03:32.486144 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.486291 kubelet[3298]: E0710 00:03:32.486174 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.487159 kubelet[3298]: E0710 00:03:32.486702 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.487159 kubelet[3298]: W0710 00:03:32.486722 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.487159 kubelet[3298]: E0710 00:03:32.486888 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.488091 kubelet[3298]: E0710 00:03:32.487948 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.488091 kubelet[3298]: W0710 00:03:32.487990 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.488091 kubelet[3298]: E0710 00:03:32.488024 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.488886 kubelet[3298]: E0710 00:03:32.488683 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.488886 kubelet[3298]: W0710 00:03:32.488719 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.488886 kubelet[3298]: E0710 00:03:32.488858 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.489756 kubelet[3298]: E0710 00:03:32.489509 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.489756 kubelet[3298]: W0710 00:03:32.489547 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.489756 kubelet[3298]: E0710 00:03:32.489581 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:32.534357 kubelet[3298]: E0710 00:03:32.534303 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:32.534357 kubelet[3298]: W0710 00:03:32.534342 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:32.534618 kubelet[3298]: E0710 00:03:32.534374 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:32.619375 containerd[1990]: time="2025-07-10T00:03:32.619217901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llk65,Uid:77f66d9e-adc9-4a54-a2b3-2f3c3f13555b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\"" Jul 10 00:03:33.879145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669406524.mount: Deactivated successfully. 
Jul 10 00:03:34.570893 kubelet[3298]: E0710 00:03:34.570836 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:34.880295 containerd[1990]: time="2025-07-10T00:03:34.879606228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:34.881084 containerd[1990]: time="2025-07-10T00:03:34.880834416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 10 00:03:34.882842 containerd[1990]: time="2025-07-10T00:03:34.882780804Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:34.887608 containerd[1990]: time="2025-07-10T00:03:34.887526469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:34.889036 containerd[1990]: time="2025-07-10T00:03:34.888716713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.443943017s" Jul 10 00:03:34.889036 containerd[1990]: time="2025-07-10T00:03:34.888769873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 00:03:34.892049 containerd[1990]: time="2025-07-10T00:03:34.891640873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:03:34.906560 containerd[1990]: time="2025-07-10T00:03:34.906514153Z" level=info msg="CreateContainer within sandbox \"90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:03:34.930414 containerd[1990]: time="2025-07-10T00:03:34.926723041Z" level=info msg="Container 090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:34.943577 containerd[1990]: time="2025-07-10T00:03:34.943527661Z" level=info msg="CreateContainer within sandbox \"90c89badc7af36322123d2003ae1bf7eec924680780eee7990f89dc3c05d2a1b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf\"" Jul 10 00:03:34.945498 containerd[1990]: time="2025-07-10T00:03:34.945450337Z" level=info msg="StartContainer for \"090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf\"" Jul 10 00:03:34.948285 containerd[1990]: time="2025-07-10T00:03:34.948234577Z" level=info msg="connecting to shim 090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf" address="unix:///run/containerd/s/0c4d71b8aba32523af38228e060e751695a558a5eb772314cf0c9cbc11a7beaf" protocol=ttrpc version=3 Jul 10 00:03:34.995707 systemd[1]: Started cri-containerd-090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf.scope - libcontainer container 090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf. 
Jul 10 00:03:35.074719 containerd[1990]: time="2025-07-10T00:03:35.074660409Z" level=info msg="StartContainer for \"090e63fe123194a6fcf1690674a8dfbd336ce275a81f07cb01122e3ecd47c9bf\" returns successfully" Jul 10 00:03:35.924642 kubelet[3298]: E0710 00:03:35.924348 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.924642 kubelet[3298]: W0710 00:03:35.924414 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.924642 kubelet[3298]: E0710 00:03:35.924452 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.925418 kubelet[3298]: E0710 00:03:35.925368 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.925872 kubelet[3298]: W0710 00:03:35.925570 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.925872 kubelet[3298]: E0710 00:03:35.925731 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.926673 kubelet[3298]: E0710 00:03:35.926371 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.926673 kubelet[3298]: W0710 00:03:35.926458 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.926673 kubelet[3298]: E0710 00:03:35.926488 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.927021 kubelet[3298]: E0710 00:03:35.926996 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.927350 kubelet[3298]: W0710 00:03:35.927308 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.927712 kubelet[3298]: E0710 00:03:35.927489 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.927937 kubelet[3298]: E0710 00:03:35.927913 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.928040 kubelet[3298]: W0710 00:03:35.928016 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.928437 kubelet[3298]: E0710 00:03:35.928160 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.929325 kubelet[3298]: E0710 00:03:35.929072 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.929325 kubelet[3298]: W0710 00:03:35.929106 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.929325 kubelet[3298]: E0710 00:03:35.929138 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.930226 kubelet[3298]: E0710 00:03:35.929866 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.930447 kubelet[3298]: W0710 00:03:35.930413 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.930569 kubelet[3298]: E0710 00:03:35.930539 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.932033 kubelet[3298]: E0710 00:03:35.931052 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.932033 kubelet[3298]: W0710 00:03:35.931080 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.932033 kubelet[3298]: E0710 00:03:35.931106 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.932563 kubelet[3298]: E0710 00:03:35.932531 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.933015 kubelet[3298]: W0710 00:03:35.932682 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.933245 kubelet[3298]: E0710 00:03:35.933215 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.934080 kubelet[3298]: E0710 00:03:35.933779 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.935033 kubelet[3298]: W0710 00:03:35.934344 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.935033 kubelet[3298]: E0710 00:03:35.934426 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.935524 kubelet[3298]: E0710 00:03:35.935494 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.935669 kubelet[3298]: W0710 00:03:35.935641 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.935863 kubelet[3298]: E0710 00:03:35.935789 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.937261 kubelet[3298]: E0710 00:03:35.936991 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.937261 kubelet[3298]: W0710 00:03:35.937025 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.937261 kubelet[3298]: E0710 00:03:35.937056 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.937659 kubelet[3298]: E0710 00:03:35.937636 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.938254 kubelet[3298]: W0710 00:03:35.938204 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.938493 kubelet[3298]: E0710 00:03:35.938465 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.939039 kubelet[3298]: E0710 00:03:35.938991 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.940263 kubelet[3298]: W0710 00:03:35.939982 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.940263 kubelet[3298]: E0710 00:03:35.940040 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.940573 kubelet[3298]: E0710 00:03:35.940549 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.940702 kubelet[3298]: W0710 00:03:35.940678 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.940827 kubelet[3298]: E0710 00:03:35.940803 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.971914 kubelet[3298]: E0710 00:03:35.971859 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.971914 kubelet[3298]: W0710 00:03:35.971900 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.972140 kubelet[3298]: E0710 00:03:35.971935 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.972765 kubelet[3298]: E0710 00:03:35.972718 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.972765 kubelet[3298]: W0710 00:03:35.972754 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.972947 kubelet[3298]: E0710 00:03:35.972812 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.973356 kubelet[3298]: E0710 00:03:35.973290 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.973356 kubelet[3298]: W0710 00:03:35.973328 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.973356 kubelet[3298]: E0710 00:03:35.973357 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.975720 kubelet[3298]: E0710 00:03:35.975678 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.976110 kubelet[3298]: W0710 00:03:35.975856 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.976110 kubelet[3298]: E0710 00:03:35.975916 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.976823 kubelet[3298]: E0710 00:03:35.976791 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.977147 kubelet[3298]: W0710 00:03:35.976974 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.977212 kubelet[3298]: E0710 00:03:35.977165 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.977830 kubelet[3298]: E0710 00:03:35.977604 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.977830 kubelet[3298]: W0710 00:03:35.977632 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.977830 kubelet[3298]: E0710 00:03:35.977695 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.978159 kubelet[3298]: E0710 00:03:35.978136 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.978441 kubelet[3298]: W0710 00:03:35.978240 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.978441 kubelet[3298]: E0710 00:03:35.978325 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.978889 kubelet[3298]: E0710 00:03:35.978852 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.979108 kubelet[3298]: W0710 00:03:35.978995 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.979495 kubelet[3298]: E0710 00:03:35.979444 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.979495 kubelet[3298]: E0710 00:03:35.979464 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.979822 kubelet[3298]: W0710 00:03:35.979507 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.979822 kubelet[3298]: E0710 00:03:35.979568 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.980054 kubelet[3298]: E0710 00:03:35.980023 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.980142 kubelet[3298]: W0710 00:03:35.980053 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.980273 kubelet[3298]: E0710 00:03:35.980182 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.981379 kubelet[3298]: E0710 00:03:35.981301 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.981379 kubelet[3298]: W0710 00:03:35.981367 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.981850 kubelet[3298]: E0710 00:03:35.981461 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.983045 kubelet[3298]: E0710 00:03:35.982992 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.983045 kubelet[3298]: W0710 00:03:35.983033 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.983743 kubelet[3298]: E0710 00:03:35.983128 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.985537 kubelet[3298]: E0710 00:03:35.984652 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.985537 kubelet[3298]: W0710 00:03:35.984697 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.985537 kubelet[3298]: E0710 00:03:35.985158 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.985916 kubelet[3298]: E0710 00:03:35.985873 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.985916 kubelet[3298]: W0710 00:03:35.985899 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.986008 kubelet[3298]: E0710 00:03:35.985929 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.987245 kubelet[3298]: E0710 00:03:35.987195 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.987245 kubelet[3298]: W0710 00:03:35.987233 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.988471 kubelet[3298]: E0710 00:03:35.987279 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.988937 kubelet[3298]: E0710 00:03:35.988895 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.989226 kubelet[3298]: W0710 00:03:35.988937 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.989570 kubelet[3298]: E0710 00:03:35.989299 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:35.991632 kubelet[3298]: E0710 00:03:35.991579 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.991632 kubelet[3298]: W0710 00:03:35.991620 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.992660 kubelet[3298]: E0710 00:03:35.991891 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:03:35.992660 kubelet[3298]: E0710 00:03:35.992049 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:03:35.992660 kubelet[3298]: W0710 00:03:35.992102 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:03:35.992660 kubelet[3298]: E0710 00:03:35.992130 3298 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:03:36.566290 kubelet[3298]: E0710 00:03:36.565753 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:36.617577 containerd[1990]: time="2025-07-10T00:03:36.617499061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:36.619099 containerd[1990]: time="2025-07-10T00:03:36.619020829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 00:03:36.620439 containerd[1990]: time="2025-07-10T00:03:36.620342581Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:36.623500 containerd[1990]: time="2025-07-10T00:03:36.623343853Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:36.625685 containerd[1990]: time="2025-07-10T00:03:36.625616317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.733917208s" Jul 10 00:03:36.625685 containerd[1990]: time="2025-07-10T00:03:36.625675489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 00:03:36.631109 containerd[1990]: time="2025-07-10T00:03:36.630904141Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:03:36.647275 containerd[1990]: time="2025-07-10T00:03:36.647204737Z" level=info msg="Container 51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:36.664738 containerd[1990]: time="2025-07-10T00:03:36.664681009Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\"" Jul 10 00:03:36.665835 containerd[1990]: time="2025-07-10T00:03:36.665719825Z" level=info msg="StartContainer for \"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\"" Jul 10 00:03:36.669424 containerd[1990]: 
time="2025-07-10T00:03:36.669312829Z" level=info msg="connecting to shim 51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b" address="unix:///run/containerd/s/58959c66e56ffde6ffefb43820bd66d7037a57ab2af652efacc32bcae41080e2" protocol=ttrpc version=3 Jul 10 00:03:36.716082 systemd[1]: Started cri-containerd-51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b.scope - libcontainer container 51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b. Jul 10 00:03:36.802016 containerd[1990]: time="2025-07-10T00:03:36.801969254Z" level=info msg="StartContainer for \"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\" returns successfully" Jul 10 00:03:36.832135 systemd[1]: cri-containerd-51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b.scope: Deactivated successfully. Jul 10 00:03:36.839461 containerd[1990]: time="2025-07-10T00:03:36.839352614Z" level=info msg="received exit event container_id:\"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\" id:\"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\" pid:4151 exited_at:{seconds:1752105816 nanos:838635974}" Jul 10 00:03:36.839901 containerd[1990]: time="2025-07-10T00:03:36.839691866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\" id:\"51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b\" pid:4151 exited_at:{seconds:1752105816 nanos:838635974}" Jul 10 00:03:36.898834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ed697bec316d6c9ad969b6c41d9e2642a8e916bccf8f4f2bb25bf27fe8360b-rootfs.mount: Deactivated successfully. 
Jul 10 00:03:36.911174 kubelet[3298]: I0710 00:03:36.911088 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67564c7b44-w4cjs" podStartSLOduration=3.46478371 podStartE2EDuration="5.911064339s" podCreationTimestamp="2025-07-10 00:03:31 +0000 UTC" firstStartedPulling="2025-07-10 00:03:32.444023084 +0000 UTC m=+32.093070784" lastFinishedPulling="2025-07-10 00:03:34.890303701 +0000 UTC m=+34.539351413" observedRunningTime="2025-07-10 00:03:35.896233562 +0000 UTC m=+35.545281298" watchObservedRunningTime="2025-07-10 00:03:36.911064339 +0000 UTC m=+36.560112039" Jul 10 00:03:37.879362 containerd[1990]: time="2025-07-10T00:03:37.879305811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:03:38.565213 kubelet[3298]: E0710 00:03:38.563861 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:40.565564 kubelet[3298]: E0710 00:03:40.564714 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:41.530362 containerd[1990]: time="2025-07-10T00:03:41.530278253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:41.533135 containerd[1990]: time="2025-07-10T00:03:41.533059074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 10 00:03:41.535586 containerd[1990]: 
time="2025-07-10T00:03:41.535489494Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:41.540131 containerd[1990]: time="2025-07-10T00:03:41.540051198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:41.541414 containerd[1990]: time="2025-07-10T00:03:41.541281570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.661566931s" Jul 10 00:03:41.541414 containerd[1990]: time="2025-07-10T00:03:41.541336590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 00:03:41.547880 containerd[1990]: time="2025-07-10T00:03:41.547787202Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:03:41.571419 containerd[1990]: time="2025-07-10T00:03:41.568629810Z" level=info msg="Container aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:41.590438 containerd[1990]: time="2025-07-10T00:03:41.590360574Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\"" Jul 10 
00:03:41.591635 containerd[1990]: time="2025-07-10T00:03:41.591593358Z" level=info msg="StartContainer for \"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\"" Jul 10 00:03:41.595049 containerd[1990]: time="2025-07-10T00:03:41.594996018Z" level=info msg="connecting to shim aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef" address="unix:///run/containerd/s/58959c66e56ffde6ffefb43820bd66d7037a57ab2af652efacc32bcae41080e2" protocol=ttrpc version=3 Jul 10 00:03:41.637751 systemd[1]: Started cri-containerd-aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef.scope - libcontainer container aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef. Jul 10 00:03:41.727117 containerd[1990]: time="2025-07-10T00:03:41.727040694Z" level=info msg="StartContainer for \"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\" returns successfully" Jul 10 00:03:42.563336 kubelet[3298]: E0710 00:03:42.563259 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:42.707245 containerd[1990]: time="2025-07-10T00:03:42.707184091Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:03:42.711576 systemd[1]: cri-containerd-aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef.scope: Deactivated successfully. Jul 10 00:03:42.712280 systemd[1]: cri-containerd-aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef.scope: Consumed 932ms CPU time, 185.9M memory peak, 165.8M written to disk. 
Jul 10 00:03:42.715857 containerd[1990]: time="2025-07-10T00:03:42.715790551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\" id:\"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\" pid:4216 exited_at:{seconds:1752105822 nanos:715219327}" Jul 10 00:03:42.715857 containerd[1990]: time="2025-07-10T00:03:42.715817011Z" level=info msg="received exit event container_id:\"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\" id:\"aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef\" pid:4216 exited_at:{seconds:1752105822 nanos:715219327}" Jul 10 00:03:42.721001 kubelet[3298]: I0710 00:03:42.720481 3298 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:03:42.818734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aff484ab79f0222e0453be2ae0656d3351829d1cd3d9bffea1141b752ecf02ef-rootfs.mount: Deactivated successfully. Jul 10 00:03:42.827009 systemd[1]: Created slice kubepods-burstable-podb8225626_c244_4702_a672_ad853272263e.slice - libcontainer container kubepods-burstable-podb8225626_c244_4702_a672_ad853272263e.slice. 
Jul 10 00:03:42.830576 kubelet[3298]: I0710 00:03:42.830499 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6kc\" (UniqueName: \"kubernetes.io/projected/b8225626-c244-4702-a672-ad853272263e-kube-api-access-pj6kc\") pod \"coredns-668d6bf9bc-hwnm7\" (UID: \"b8225626-c244-4702-a672-ad853272263e\") " pod="kube-system/coredns-668d6bf9bc-hwnm7" Jul 10 00:03:42.830718 kubelet[3298]: I0710 00:03:42.830584 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36ad65bb-9edb-4db0-9097-ab8516085854-config-volume\") pod \"coredns-668d6bf9bc-rs7g9\" (UID: \"36ad65bb-9edb-4db0-9097-ab8516085854\") " pod="kube-system/coredns-668d6bf9bc-rs7g9" Jul 10 00:03:42.830718 kubelet[3298]: I0710 00:03:42.830625 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87jn\" (UniqueName: \"kubernetes.io/projected/36ad65bb-9edb-4db0-9097-ab8516085854-kube-api-access-h87jn\") pod \"coredns-668d6bf9bc-rs7g9\" (UID: \"36ad65bb-9edb-4db0-9097-ab8516085854\") " pod="kube-system/coredns-668d6bf9bc-rs7g9" Jul 10 00:03:42.830718 kubelet[3298]: I0710 00:03:42.830663 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8225626-c244-4702-a672-ad853272263e-config-volume\") pod \"coredns-668d6bf9bc-hwnm7\" (UID: \"b8225626-c244-4702-a672-ad853272263e\") " pod="kube-system/coredns-668d6bf9bc-hwnm7" Jul 10 00:03:42.857129 systemd[1]: Created slice kubepods-burstable-pod36ad65bb_9edb_4db0_9097_ab8516085854.slice - libcontainer container kubepods-burstable-pod36ad65bb_9edb_4db0_9097_ab8516085854.slice. 
Jul 10 00:03:42.890785 systemd[1]: Created slice kubepods-besteffort-pod6c47ff3b_ef7c_4494_b399_5a6a62047af4.slice - libcontainer container kubepods-besteffort-pod6c47ff3b_ef7c_4494_b399_5a6a62047af4.slice. Jul 10 00:03:42.920448 systemd[1]: Created slice kubepods-besteffort-pod17db8b91_9713_4ad3_8e2a_e7a1b996f01d.slice - libcontainer container kubepods-besteffort-pod17db8b91_9713_4ad3_8e2a_e7a1b996f01d.slice. Jul 10 00:03:42.930950 kubelet[3298]: I0710 00:03:42.930886 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc7c95fb-936f-48c1-97af-af78b053d354-calico-apiserver-certs\") pod \"calico-apiserver-94958988c-ktf4v\" (UID: \"bc7c95fb-936f-48c1-97af-af78b053d354\") " pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" Jul 10 00:03:42.930950 kubelet[3298]: I0710 00:03:42.930956 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96d8309c-c796-4514-84c5-6e5f9f3dca37-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-hvkcr\" (UID: \"96d8309c-c796-4514-84c5-6e5f9f3dca37\") " pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:42.931229 kubelet[3298]: I0710 00:03:42.930995 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrvb\" (UniqueName: \"kubernetes.io/projected/96d8309c-c796-4514-84c5-6e5f9f3dca37-kube-api-access-8mrvb\") pod \"goldmane-768f4c5c69-hvkcr\" (UID: \"96d8309c-c796-4514-84c5-6e5f9f3dca37\") " pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:42.931229 kubelet[3298]: I0710 00:03:42.931040 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlt5d\" (UniqueName: \"kubernetes.io/projected/4fe184b3-b13a-4c89-bc66-4b307fc7f633-kube-api-access-xlt5d\") pod \"whisker-86947c6954-mcnnk\" 
(UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " pod="calico-system/whisker-86947c6954-mcnnk" Jul 10 00:03:42.931229 kubelet[3298]: I0710 00:03:42.931100 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c47ff3b-ef7c-4494-b399-5a6a62047af4-tigera-ca-bundle\") pod \"calico-kube-controllers-7f9944db5-x5s7z\" (UID: \"6c47ff3b-ef7c-4494-b399-5a6a62047af4\") " pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" Jul 10 00:03:42.931229 kubelet[3298]: I0710 00:03:42.931144 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-ca-bundle\") pod \"whisker-86947c6954-mcnnk\" (UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " pod="calico-system/whisker-86947c6954-mcnnk" Jul 10 00:03:42.931229 kubelet[3298]: I0710 00:03:42.931204 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17db8b91-9713-4ad3-8e2a-e7a1b996f01d-calico-apiserver-certs\") pod \"calico-apiserver-94958988c-c7snx\" (UID: \"17db8b91-9713-4ad3-8e2a-e7a1b996f01d\") " pod="calico-apiserver/calico-apiserver-94958988c-c7snx" Jul 10 00:03:42.935157 kubelet[3298]: I0710 00:03:42.931245 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w78\" (UniqueName: \"kubernetes.io/projected/17db8b91-9713-4ad3-8e2a-e7a1b996f01d-kube-api-access-s5w78\") pod \"calico-apiserver-94958988c-c7snx\" (UID: \"17db8b91-9713-4ad3-8e2a-e7a1b996f01d\") " pod="calico-apiserver/calico-apiserver-94958988c-c7snx" Jul 10 00:03:42.935157 kubelet[3298]: I0710 00:03:42.931282 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/96d8309c-c796-4514-84c5-6e5f9f3dca37-config\") pod \"goldmane-768f4c5c69-hvkcr\" (UID: \"96d8309c-c796-4514-84c5-6e5f9f3dca37\") " pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:42.935157 kubelet[3298]: I0710 00:03:42.931318 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdllb\" (UniqueName: \"kubernetes.io/projected/6c47ff3b-ef7c-4494-b399-5a6a62047af4-kube-api-access-kdllb\") pod \"calico-kube-controllers-7f9944db5-x5s7z\" (UID: \"6c47ff3b-ef7c-4494-b399-5a6a62047af4\") " pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" Jul 10 00:03:42.935157 kubelet[3298]: I0710 00:03:42.931431 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjp8\" (UniqueName: \"kubernetes.io/projected/bc7c95fb-936f-48c1-97af-af78b053d354-kube-api-access-pmjp8\") pod \"calico-apiserver-94958988c-ktf4v\" (UID: \"bc7c95fb-936f-48c1-97af-af78b053d354\") " pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" Jul 10 00:03:42.935157 kubelet[3298]: I0710 00:03:42.931485 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/96d8309c-c796-4514-84c5-6e5f9f3dca37-goldmane-key-pair\") pod \"goldmane-768f4c5c69-hvkcr\" (UID: \"96d8309c-c796-4514-84c5-6e5f9f3dca37\") " pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:42.935475 kubelet[3298]: I0710 00:03:42.931524 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-backend-key-pair\") pod \"whisker-86947c6954-mcnnk\" (UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " pod="calico-system/whisker-86947c6954-mcnnk" Jul 10 00:03:42.944921 systemd[1]: Created slice 
kubepods-besteffort-podbc7c95fb_936f_48c1_97af_af78b053d354.slice - libcontainer container kubepods-besteffort-podbc7c95fb_936f_48c1_97af_af78b053d354.slice. Jul 10 00:03:42.975372 systemd[1]: Created slice kubepods-besteffort-pod4fe184b3_b13a_4c89_bc66_4b307fc7f633.slice - libcontainer container kubepods-besteffort-pod4fe184b3_b13a_4c89_bc66_4b307fc7f633.slice. Jul 10 00:03:43.026720 systemd[1]: Created slice kubepods-besteffort-pod96d8309c_c796_4514_84c5_6e5f9f3dca37.slice - libcontainer container kubepods-besteffort-pod96d8309c_c796_4514_84c5_6e5f9f3dca37.slice. Jul 10 00:03:43.147936 containerd[1990]: time="2025-07-10T00:03:43.146913438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwnm7,Uid:b8225626-c244-4702-a672-ad853272263e,Namespace:kube-system,Attempt:0,}" Jul 10 00:03:43.177525 containerd[1990]: time="2025-07-10T00:03:43.177194814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7g9,Uid:36ad65bb-9edb-4db0-9097-ab8516085854,Namespace:kube-system,Attempt:0,}" Jul 10 00:03:43.203941 containerd[1990]: time="2025-07-10T00:03:43.203777970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9944db5-x5s7z,Uid:6c47ff3b-ef7c-4494-b399-5a6a62047af4,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:43.246356 containerd[1990]: time="2025-07-10T00:03:43.246140430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-c7snx,Uid:17db8b91-9713-4ad3-8e2a-e7a1b996f01d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:03:43.254888 containerd[1990]: time="2025-07-10T00:03:43.254680614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-ktf4v,Uid:bc7c95fb-936f-48c1-97af-af78b053d354,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:03:43.321617 containerd[1990]: time="2025-07-10T00:03:43.321421398Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-86947c6954-mcnnk,Uid:4fe184b3-b13a-4c89-bc66-4b307fc7f633,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:43.333808 containerd[1990]: time="2025-07-10T00:03:43.333623070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hvkcr,Uid:96d8309c-c796-4514-84c5-6e5f9f3dca37,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:43.680206 containerd[1990]: time="2025-07-10T00:03:43.680071316Z" level=error msg="Failed to destroy network for sandbox \"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.688704 containerd[1990]: time="2025-07-10T00:03:43.688606268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwnm7,Uid:b8225626-c244-4702-a672-ad853272263e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.689753 kubelet[3298]: E0710 00:03:43.688936 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.689753 kubelet[3298]: E0710 00:03:43.689033 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hwnm7" Jul 10 00:03:43.689753 kubelet[3298]: E0710 00:03:43.689067 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hwnm7" Jul 10 00:03:43.690468 kubelet[3298]: E0710 00:03:43.689146 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hwnm7_kube-system(b8225626-c244-4702-a672-ad853272263e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hwnm7_kube-system(b8225626-c244-4702-a672-ad853272263e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e1b7dc5c2b3196c3017a84bdd63b23071caa4875db956ee461fd6ebf53fb227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hwnm7" podUID="b8225626-c244-4702-a672-ad853272263e" Jul 10 00:03:43.728599 containerd[1990]: time="2025-07-10T00:03:43.728531156Z" level=error msg="Failed to destroy network for sandbox \"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.733855 
containerd[1990]: time="2025-07-10T00:03:43.733465640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9944db5-x5s7z,Uid:6c47ff3b-ef7c-4494-b399-5a6a62047af4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.735190 kubelet[3298]: E0710 00:03:43.735110 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.735498 kubelet[3298]: E0710 00:03:43.735203 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" Jul 10 00:03:43.735498 kubelet[3298]: E0710 00:03:43.735241 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" Jul 10 00:03:43.735498 kubelet[3298]: E0710 00:03:43.735321 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9944db5-x5s7z_calico-system(6c47ff3b-ef7c-4494-b399-5a6a62047af4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9944db5-x5s7z_calico-system(6c47ff3b-ef7c-4494-b399-5a6a62047af4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1da89de0e61a500260b1a735320423da67889825184da3838be171cc5e8df3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" podUID="6c47ff3b-ef7c-4494-b399-5a6a62047af4" Jul 10 00:03:43.770074 containerd[1990]: time="2025-07-10T00:03:43.770015157Z" level=error msg="Failed to destroy network for sandbox \"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.775690 containerd[1990]: time="2025-07-10T00:03:43.775618101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-c7snx,Uid:17db8b91-9713-4ad3-8e2a-e7a1b996f01d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.776882 kubelet[3298]: E0710 00:03:43.776801 3298 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.777056 kubelet[3298]: E0710 00:03:43.776890 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94958988c-c7snx" Jul 10 00:03:43.777056 kubelet[3298]: E0710 00:03:43.776928 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94958988c-c7snx" Jul 10 00:03:43.777056 kubelet[3298]: E0710 00:03:43.777002 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94958988c-c7snx_calico-apiserver(17db8b91-9713-4ad3-8e2a-e7a1b996f01d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94958988c-c7snx_calico-apiserver(17db8b91-9713-4ad3-8e2a-e7a1b996f01d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ac67edd8f81be905087d470f4a837ea7333b9be0954223f9419a6cf682d2da7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94958988c-c7snx" podUID="17db8b91-9713-4ad3-8e2a-e7a1b996f01d" Jul 10 00:03:43.784141 containerd[1990]: time="2025-07-10T00:03:43.784028829Z" level=error msg="Failed to destroy network for sandbox \"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.791219 containerd[1990]: time="2025-07-10T00:03:43.791108109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7g9,Uid:36ad65bb-9edb-4db0-9097-ab8516085854,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.791766 kubelet[3298]: E0710 00:03:43.791451 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.791766 kubelet[3298]: E0710 00:03:43.791523 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rs7g9" Jul 10 00:03:43.791766 kubelet[3298]: E0710 00:03:43.791563 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rs7g9" Jul 10 00:03:43.792363 kubelet[3298]: E0710 00:03:43.791633 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rs7g9_kube-system(36ad65bb-9edb-4db0-9097-ab8516085854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rs7g9_kube-system(36ad65bb-9edb-4db0-9097-ab8516085854)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"810e25ed595536030ad486a4be8776a88affaa59cc072997f5603709ef73ec2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rs7g9" podUID="36ad65bb-9edb-4db0-9097-ab8516085854" Jul 10 00:03:43.797341 containerd[1990]: time="2025-07-10T00:03:43.797269233Z" level=error msg="Failed to destroy network for sandbox \"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.801470 containerd[1990]: time="2025-07-10T00:03:43.801145149Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-hvkcr,Uid:96d8309c-c796-4514-84c5-6e5f9f3dca37,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.802330 kubelet[3298]: E0710 00:03:43.801770 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.802330 kubelet[3298]: E0710 00:03:43.801846 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:43.802330 kubelet[3298]: E0710 00:03:43.801879 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hvkcr" Jul 10 00:03:43.802709 kubelet[3298]: E0710 00:03:43.801954 3298 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-hvkcr_calico-system(96d8309c-c796-4514-84c5-6e5f9f3dca37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-hvkcr_calico-system(96d8309c-c796-4514-84c5-6e5f9f3dca37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eac3cecdc87ef20b59720a16bb4e80d91890fbef3f2d07141e53729264a46f94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-hvkcr" podUID="96d8309c-c796-4514-84c5-6e5f9f3dca37" Jul 10 00:03:43.823100 containerd[1990]: time="2025-07-10T00:03:43.823029009Z" level=error msg="Failed to destroy network for sandbox \"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.825941 containerd[1990]: time="2025-07-10T00:03:43.825861993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-ktf4v,Uid:bc7c95fb-936f-48c1-97af-af78b053d354,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.829835 kubelet[3298]: E0710 00:03:43.827931 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.829835 kubelet[3298]: E0710 00:03:43.828005 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" Jul 10 00:03:43.829835 kubelet[3298]: E0710 00:03:43.828036 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" Jul 10 00:03:43.830047 kubelet[3298]: E0710 00:03:43.828107 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94958988c-ktf4v_calico-apiserver(bc7c95fb-936f-48c1-97af-af78b053d354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94958988c-ktf4v_calico-apiserver(bc7c95fb-936f-48c1-97af-af78b053d354)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d5ca980e73b58342d794cb7b527b763ea6e3ba0ab7fedebcdc00ce4aee6ef30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" podUID="bc7c95fb-936f-48c1-97af-af78b053d354" Jul 10 
00:03:43.831597 containerd[1990]: time="2025-07-10T00:03:43.831155073Z" level=error msg="Failed to destroy network for sandbox \"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.834917 containerd[1990]: time="2025-07-10T00:03:43.834837657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86947c6954-mcnnk,Uid:4fe184b3-b13a-4c89-bc66-4b307fc7f633,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.839412 kubelet[3298]: E0710 00:03:43.837263 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:43.839412 kubelet[3298]: E0710 00:03:43.837337 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86947c6954-mcnnk" Jul 10 00:03:43.839791 kubelet[3298]: E0710 00:03:43.837376 3298 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86947c6954-mcnnk" Jul 10 00:03:43.840116 kubelet[3298]: E0710 00:03:43.839750 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86947c6954-mcnnk_calico-system(4fe184b3-b13a-4c89-bc66-4b307fc7f633)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86947c6954-mcnnk_calico-system(4fe184b3-b13a-4c89-bc66-4b307fc7f633)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f59be8af52fa2c59c253b198a7765600a161c6c6532a010d3b42a75a15631558\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86947c6954-mcnnk" podUID="4fe184b3-b13a-4c89-bc66-4b307fc7f633" Jul 10 00:03:43.855509 systemd[1]: run-netns-cni\x2db0e62a5e\x2df9d5\x2ddee5\x2d8d99\x2d8b655c1f3390.mount: Deactivated successfully. Jul 10 00:03:43.855711 systemd[1]: run-netns-cni\x2def0e0c86\x2d7fa6\x2d69f4\x2d8862\x2d538d16cde8b1.mount: Deactivated successfully. Jul 10 00:03:43.926925 containerd[1990]: time="2025-07-10T00:03:43.926131161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:03:44.576066 systemd[1]: Created slice kubepods-besteffort-pod63526049_3309_4f65_ad78_b95e459a7f01.slice - libcontainer container kubepods-besteffort-pod63526049_3309_4f65_ad78_b95e459a7f01.slice. 
Jul 10 00:03:44.581450 containerd[1990]: time="2025-07-10T00:03:44.581367801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvfdh,Uid:63526049-3309-4f65-ad78-b95e459a7f01,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:44.673864 containerd[1990]: time="2025-07-10T00:03:44.673641213Z" level=error msg="Failed to destroy network for sandbox \"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:44.678381 systemd[1]: run-netns-cni\x2d207386cb\x2dc966\x2d56a2\x2d9905\x2d4f36cf458fc4.mount: Deactivated successfully. Jul 10 00:03:44.685342 containerd[1990]: time="2025-07-10T00:03:44.685265661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvfdh,Uid:63526049-3309-4f65-ad78-b95e459a7f01,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:44.685669 kubelet[3298]: E0710 00:03:44.685604 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:03:44.685747 kubelet[3298]: E0710 00:03:44.685686 3298 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:44.685747 kubelet[3298]: E0710 00:03:44.685722 3298 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvfdh" Jul 10 00:03:44.685869 kubelet[3298]: E0710 00:03:44.685787 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cvfdh_calico-system(63526049-3309-4f65-ad78-b95e459a7f01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cvfdh_calico-system(63526049-3309-4f65-ad78-b95e459a7f01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2647d648c46332e01095556f4f2307c0196476ac5cad218d865452f431b2b1d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvfdh" podUID="63526049-3309-4f65-ad78-b95e459a7f01" Jul 10 00:03:52.104550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090543888.mount: Deactivated successfully. 
Jul 10 00:03:52.183782 containerd[1990]: time="2025-07-10T00:03:52.183724214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:52.185908 containerd[1990]: time="2025-07-10T00:03:52.185852174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 00:03:52.187989 containerd[1990]: time="2025-07-10T00:03:52.187913594Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:52.192150 containerd[1990]: time="2025-07-10T00:03:52.192069902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:52.193375 containerd[1990]: time="2025-07-10T00:03:52.193167938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 8.266674605s" Jul 10 00:03:52.193375 containerd[1990]: time="2025-07-10T00:03:52.193226678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 00:03:52.237590 containerd[1990]: time="2025-07-10T00:03:52.237528711Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:03:52.265630 containerd[1990]: time="2025-07-10T00:03:52.263288355Z" level=info msg="Container 
12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:52.288855 containerd[1990]: time="2025-07-10T00:03:52.288777591Z" level=info msg="CreateContainer within sandbox \"ef411aa4a287d53acdf6c804318bb52634a22b15e362803f684d583587bca3df\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\"" Jul 10 00:03:52.290018 containerd[1990]: time="2025-07-10T00:03:52.289709463Z" level=info msg="StartContainer for \"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\"" Jul 10 00:03:52.296216 containerd[1990]: time="2025-07-10T00:03:52.296161731Z" level=info msg="connecting to shim 12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46" address="unix:///run/containerd/s/58959c66e56ffde6ffefb43820bd66d7037a57ab2af652efacc32bcae41080e2" protocol=ttrpc version=3 Jul 10 00:03:52.329694 systemd[1]: Started cri-containerd-12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46.scope - libcontainer container 12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46. Jul 10 00:03:52.429810 containerd[1990]: time="2025-07-10T00:03:52.429625540Z" level=info msg="StartContainer for \"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" returns successfully" Jul 10 00:03:52.674253 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:03:52.674426 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 10 00:03:52.930648 kubelet[3298]: I0710 00:03:52.930574 3298 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-backend-key-pair\") pod \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\" (UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " Jul 10 00:03:52.931165 kubelet[3298]: I0710 00:03:52.930970 3298 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlt5d\" (UniqueName: \"kubernetes.io/projected/4fe184b3-b13a-4c89-bc66-4b307fc7f633-kube-api-access-xlt5d\") pod \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\" (UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " Jul 10 00:03:52.933957 kubelet[3298]: I0710 00:03:52.932410 3298 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-ca-bundle\") pod \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\" (UID: \"4fe184b3-b13a-4c89-bc66-4b307fc7f633\") " Jul 10 00:03:52.935707 kubelet[3298]: I0710 00:03:52.935630 3298 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4fe184b3-b13a-4c89-bc66-4b307fc7f633" (UID: "4fe184b3-b13a-4c89-bc66-4b307fc7f633"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:03:52.947082 kubelet[3298]: I0710 00:03:52.947000 3298 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4fe184b3-b13a-4c89-bc66-4b307fc7f633" (UID: "4fe184b3-b13a-4c89-bc66-4b307fc7f633"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:03:52.947505 kubelet[3298]: I0710 00:03:52.947453 3298 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe184b3-b13a-4c89-bc66-4b307fc7f633-kube-api-access-xlt5d" (OuterVolumeSpecName: "kube-api-access-xlt5d") pod "4fe184b3-b13a-4c89-bc66-4b307fc7f633" (UID: "4fe184b3-b13a-4c89-bc66-4b307fc7f633"). InnerVolumeSpecName "kube-api-access-xlt5d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:03:53.032469 systemd[1]: Removed slice kubepods-besteffort-pod4fe184b3_b13a_4c89_bc66_4b307fc7f633.slice - libcontainer container kubepods-besteffort-pod4fe184b3_b13a_4c89_bc66_4b307fc7f633.slice. Jul 10 00:03:53.033900 kubelet[3298]: I0710 00:03:53.033845 3298 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-backend-key-pair\") on node \"ip-172-31-25-230\" DevicePath \"\"" Jul 10 00:03:53.034005 kubelet[3298]: I0710 00:03:53.033900 3298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlt5d\" (UniqueName: \"kubernetes.io/projected/4fe184b3-b13a-4c89-bc66-4b307fc7f633-kube-api-access-xlt5d\") on node \"ip-172-31-25-230\" DevicePath \"\"" Jul 10 00:03:53.034005 kubelet[3298]: I0710 00:03:53.033926 3298 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe184b3-b13a-4c89-bc66-4b307fc7f633-whisker-ca-bundle\") on node \"ip-172-31-25-230\" DevicePath \"\"" Jul 10 00:03:53.077006 kubelet[3298]: I0710 00:03:53.076814 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-llk65" podStartSLOduration=2.508439746 podStartE2EDuration="22.076768839s" podCreationTimestamp="2025-07-10 00:03:31 +0000 UTC" firstStartedPulling="2025-07-10 00:03:32.626586369 +0000 UTC m=+32.275634081" lastFinishedPulling="2025-07-10 
00:03:52.194915462 +0000 UTC m=+51.843963174" observedRunningTime="2025-07-10 00:03:53.076095963 +0000 UTC m=+52.725143675" watchObservedRunningTime="2025-07-10 00:03:53.076768839 +0000 UTC m=+52.725816539" Jul 10 00:03:53.110945 systemd[1]: var-lib-kubelet-pods-4fe184b3\x2db13a\x2d4c89\x2dbc66\x2d4b307fc7f633-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlt5d.mount: Deactivated successfully. Jul 10 00:03:53.111149 systemd[1]: var-lib-kubelet-pods-4fe184b3\x2db13a\x2d4c89\x2dbc66\x2d4b307fc7f633-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:03:53.233210 systemd[1]: Created slice kubepods-besteffort-podd4aefa09_ea1e_4c47_8fe0_e2887aa40f11.slice - libcontainer container kubepods-besteffort-podd4aefa09_ea1e_4c47_8fe0_e2887aa40f11.slice. Jul 10 00:03:53.335610 kubelet[3298]: I0710 00:03:53.335530 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4aefa09-ea1e-4c47-8fe0-e2887aa40f11-whisker-backend-key-pair\") pod \"whisker-64c668f858-g2g5j\" (UID: \"d4aefa09-ea1e-4c47-8fe0-e2887aa40f11\") " pod="calico-system/whisker-64c668f858-g2g5j" Jul 10 00:03:53.335769 kubelet[3298]: I0710 00:03:53.335625 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6fg\" (UniqueName: \"kubernetes.io/projected/d4aefa09-ea1e-4c47-8fe0-e2887aa40f11-kube-api-access-2k6fg\") pod \"whisker-64c668f858-g2g5j\" (UID: \"d4aefa09-ea1e-4c47-8fe0-e2887aa40f11\") " pod="calico-system/whisker-64c668f858-g2g5j" Jul 10 00:03:53.335769 kubelet[3298]: I0710 00:03:53.335706 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4aefa09-ea1e-4c47-8fe0-e2887aa40f11-whisker-ca-bundle\") pod \"whisker-64c668f858-g2g5j\" (UID: 
\"d4aefa09-ea1e-4c47-8fe0-e2887aa40f11\") " pod="calico-system/whisker-64c668f858-g2g5j" Jul 10 00:03:53.543680 containerd[1990]: time="2025-07-10T00:03:53.543449549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c668f858-g2g5j,Uid:d4aefa09-ea1e-4c47-8fe0-e2887aa40f11,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:53.654029 containerd[1990]: time="2025-07-10T00:03:53.653964810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" id:\"59dbdbb28c68260924c8db23cf1b649b47013be4c6a1ffabfd1ebdac115d576d\" pid:4538 exit_status:1 exited_at:{seconds:1752105833 nanos:652734834}" Jul 10 00:03:53.906566 (udev-worker)[4510]: Network interface NamePolicy= disabled on kernel command line. Jul 10 00:03:53.910876 systemd-networkd[1885]: cali601ec9fff03: Link UP Jul 10 00:03:53.913272 systemd-networkd[1885]: cali601ec9fff03: Gained carrier Jul 10 00:03:53.940484 containerd[1990]: 2025-07-10 00:03:53.628 [INFO][4553] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:03:53.940484 containerd[1990]: 2025-07-10 00:03:53.734 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0 whisker-64c668f858- calico-system d4aefa09-ea1e-4c47-8fe0-e2887aa40f11 926 0 2025-07-10 00:03:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64c668f858 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-230 whisker-64c668f858-g2g5j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali601ec9fff03 [] [] }} ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-" Jul 
10 00:03:53.940484 containerd[1990]: 2025-07-10 00:03:53.735 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.940484 containerd[1990]: 2025-07-10 00:03:53.822 [INFO][4575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" HandleID="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Workload="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.822 [INFO][4575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" HandleID="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Workload="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000378750), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-230", "pod":"whisker-64c668f858-g2g5j", "timestamp":"2025-07-10 00:03:53.822680359 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.823 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.823 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.823 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.840 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" host="ip-172-31-25-230" Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.849 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.857 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.860 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:53.940889 containerd[1990]: 2025-07-10 00:03:53.864 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.864 [INFO][4575] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" host="ip-172-31-25-230" Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.867 [INFO][4575] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583 Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.873 [INFO][4575] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" host="ip-172-31-25-230" Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.887 [INFO][4575] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.193/26] 
block=192.168.100.192/26 handle="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" host="ip-172-31-25-230" Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.887 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] handle="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" host="ip-172-31-25-230" Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.887 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:53.941340 containerd[1990]: 2025-07-10 00:03:53.887 [INFO][4575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" HandleID="k8s-pod-network.5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Workload="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.942418 containerd[1990]: 2025-07-10 00:03:53.894 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0", GenerateName:"whisker-64c668f858-", Namespace:"calico-system", SelfLink:"", UID:"d4aefa09-ea1e-4c47-8fe0-e2887aa40f11", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c668f858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"whisker-64c668f858-g2g5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali601ec9fff03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:53.942418 containerd[1990]: 2025-07-10 00:03:53.894 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.193/32] ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.942742 containerd[1990]: 2025-07-10 00:03:53.894 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali601ec9fff03 ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.942742 containerd[1990]: 2025-07-10 00:03:53.912 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.942966 containerd[1990]: 2025-07-10 00:03:53.914 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0", GenerateName:"whisker-64c668f858-", Namespace:"calico-system", SelfLink:"", UID:"d4aefa09-ea1e-4c47-8fe0-e2887aa40f11", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c668f858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583", Pod:"whisker-64c668f858-g2g5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali601ec9fff03", MAC:"c2:47:a4:52:b7:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:53.943236 containerd[1990]: 2025-07-10 00:03:53.930 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" Namespace="calico-system" Pod="whisker-64c668f858-g2g5j" 
WorkloadEndpoint="ip--172--31--25--230-k8s-whisker--64c668f858--g2g5j-eth0" Jul 10 00:03:53.988721 containerd[1990]: time="2025-07-10T00:03:53.988649599Z" level=info msg="connecting to shim 5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583" address="unix:///run/containerd/s/1319406ee65e95b0bd1e21a817369723b1ea4ce35e200d392f78bc0137a94558" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:54.038722 systemd[1]: Started cri-containerd-5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583.scope - libcontainer container 5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583. Jul 10 00:03:54.149272 containerd[1990]: time="2025-07-10T00:03:54.149171452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c668f858-g2g5j,Uid:d4aefa09-ea1e-4c47-8fe0-e2887aa40f11,Namespace:calico-system,Attempt:0,} returns sandbox id \"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583\"" Jul 10 00:03:54.153796 containerd[1990]: time="2025-07-10T00:03:54.153732064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:03:54.194852 containerd[1990]: time="2025-07-10T00:03:54.194700220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" id:\"5c101b3b909c06f7fd19441cd71fe310d627e03ecf688a0c11c1b7a2cde4b30a\" pid:4631 exit_status:1 exited_at:{seconds:1752105834 nanos:193865356}" Jul 10 00:03:54.568959 kubelet[3298]: I0710 00:03:54.568892 3298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe184b3-b13a-4c89-bc66-4b307fc7f633" path="/var/lib/kubelet/pods/4fe184b3-b13a-4c89-bc66-4b307fc7f633/volumes" Jul 10 00:03:55.073507 systemd-networkd[1885]: cali601ec9fff03: Gained IPv6LL Jul 10 00:03:55.566812 containerd[1990]: time="2025-07-10T00:03:55.566537491Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-hvkcr,Uid:96d8309c-c796-4514-84c5-6e5f9f3dca37,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:55.748757 containerd[1990]: time="2025-07-10T00:03:55.748137416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:55.751049 containerd[1990]: time="2025-07-10T00:03:55.750968876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 00:03:55.754531 containerd[1990]: time="2025-07-10T00:03:55.754477616Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:55.775447 containerd[1990]: time="2025-07-10T00:03:55.775281524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:03:55.777898 containerd[1990]: time="2025-07-10T00:03:55.777815252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.6240166s" Jul 10 00:03:55.777898 containerd[1990]: time="2025-07-10T00:03:55.777882500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 00:03:55.788110 containerd[1990]: time="2025-07-10T00:03:55.787930220Z" level=info msg="CreateContainer within sandbox \"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:03:55.825093 containerd[1990]: time="2025-07-10T00:03:55.824941544Z" level=info msg="Container 4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:55.863003 containerd[1990]: time="2025-07-10T00:03:55.859679145Z" level=info msg="CreateContainer within sandbox \"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f\"" Jul 10 00:03:55.864550 containerd[1990]: time="2025-07-10T00:03:55.864472449Z" level=info msg="StartContainer for \"4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f\"" Jul 10 00:03:55.873196 containerd[1990]: time="2025-07-10T00:03:55.872840277Z" level=info msg="connecting to shim 4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f" address="unix:///run/containerd/s/1319406ee65e95b0bd1e21a817369723b1ea4ce35e200d392f78bc0137a94558" protocol=ttrpc version=3 Jul 10 00:03:55.957060 systemd[1]: Started cri-containerd-4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f.scope - libcontainer container 4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f. Jul 10 00:03:56.038917 (udev-worker)[4509]: Network interface NamePolicy= disabled on kernel command line. 
Jul 10 00:03:56.043779 systemd-networkd[1885]: cali764f74568d8: Link UP Jul 10 00:03:56.045528 systemd-networkd[1885]: cali764f74568d8: Gained carrier Jul 10 00:03:56.092024 containerd[1990]: 2025-07-10 00:03:55.744 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0 goldmane-768f4c5c69- calico-system 96d8309c-c796-4514-84c5-6e5f9f3dca37 857 0 2025-07-10 00:03:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-230 goldmane-768f4c5c69-hvkcr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali764f74568d8 [] [] }} ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-" Jul 10 00:03:56.092024 containerd[1990]: 2025-07-10 00:03:55.745 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.092024 containerd[1990]: 2025-07-10 00:03:55.859 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" HandleID="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Workload="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.859 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" HandleID="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Workload="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-230", "pod":"goldmane-768f4c5c69-hvkcr", "timestamp":"2025-07-10 00:03:55.859681209 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.860 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.860 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.860 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.900 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" host="ip-172-31-25-230" Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.916 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.935 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.940 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.093178 containerd[1990]: 2025-07-10 00:03:55.951 [INFO][4776] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:55.951 [INFO][4776] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" host="ip-172-31-25-230" Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:55.958 [INFO][4776] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:55.980 [INFO][4776] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" host="ip-172-31-25-230" Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:56.001 [INFO][4776] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" host="ip-172-31-25-230" Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:56.001 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" host="ip-172-31-25-230" Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:56.001 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:03:56.094104 containerd[1990]: 2025-07-10 00:03:56.001 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" HandleID="k8s-pod-network.3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Workload="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.094684 containerd[1990]: 2025-07-10 00:03:56.023 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"96d8309c-c796-4514-84c5-6e5f9f3dca37", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"goldmane-768f4c5c69-hvkcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali764f74568d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:56.094684 containerd[1990]: 2025-07-10 00:03:56.023 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.194/32] ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.095136 containerd[1990]: 2025-07-10 00:03:56.027 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali764f74568d8 ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.095136 containerd[1990]: 2025-07-10 00:03:56.046 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.095380 containerd[1990]: 2025-07-10 00:03:56.046 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"96d8309c-c796-4514-84c5-6e5f9f3dca37", ResourceVersion:"857", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee", Pod:"goldmane-768f4c5c69-hvkcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali764f74568d8", MAC:"f2:40:f0:4e:20:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:56.095685 containerd[1990]: 2025-07-10 00:03:56.087 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" Namespace="calico-system" Pod="goldmane-768f4c5c69-hvkcr" WorkloadEndpoint="ip--172--31--25--230-k8s-goldmane--768f4c5c69--hvkcr-eth0" Jul 10 00:03:56.165784 containerd[1990]: time="2025-07-10T00:03:56.165721626Z" level=info msg="StartContainer for \"4c496236ca27b095037d5703071ad10ead56f555ad90054971391c5395e2938f\" returns successfully" Jul 10 00:03:56.172303 containerd[1990]: time="2025-07-10T00:03:56.172233486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:03:56.186416 containerd[1990]: time="2025-07-10T00:03:56.185763786Z" level=info msg="connecting to shim 
3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee" address="unix:///run/containerd/s/0c75c9f497bb54c0192fee88d7955169eaff684b81bea6aaa70a1bc8c6539d6f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:56.262753 systemd[1]: Started cri-containerd-3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee.scope - libcontainer container 3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee. Jul 10 00:03:56.360126 containerd[1990]: time="2025-07-10T00:03:56.358557307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hvkcr,Uid:96d8309c-c796-4514-84c5-6e5f9f3dca37,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee\"" Jul 10 00:03:56.548658 systemd-networkd[1885]: vxlan.calico: Link UP Jul 10 00:03:56.548673 systemd-networkd[1885]: vxlan.calico: Gained carrier Jul 10 00:03:56.569557 containerd[1990]: time="2025-07-10T00:03:56.568095032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-ktf4v,Uid:bc7c95fb-936f-48c1-97af-af78b053d354,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:03:56.871729 (udev-worker)[4931]: Network interface NamePolicy= disabled on kernel command line. 
Jul 10 00:03:56.872918 systemd-networkd[1885]: cali88dc6811e27: Link UP Jul 10 00:03:56.874621 systemd-networkd[1885]: cali88dc6811e27: Gained carrier Jul 10 00:03:56.915355 containerd[1990]: 2025-07-10 00:03:56.711 [INFO][4909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0 calico-apiserver-94958988c- calico-apiserver bc7c95fb-936f-48c1-97af-af78b053d354 862 0 2025-07-10 00:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94958988c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-230 calico-apiserver-94958988c-ktf4v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88dc6811e27 [] [] }} ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-" Jul 10 00:03:56.915355 containerd[1990]: 2025-07-10 00:03:56.711 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.915355 containerd[1990]: 2025-07-10 00:03:56.770 [INFO][4936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" HandleID="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 
00:03:56.771 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" HandleID="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-230", "pod":"calico-apiserver-94958988c-ktf4v", "timestamp":"2025-07-10 00:03:56.770862981 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.771 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.771 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.771 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.786 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" host="ip-172-31-25-230" Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.794 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.810 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.813 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.915770 containerd[1990]: 2025-07-10 00:03:56.818 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.819 [INFO][4936] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" host="ip-172-31-25-230" Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.823 [INFO][4936] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93 Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.846 [INFO][4936] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" host="ip-172-31-25-230" Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.863 [INFO][4936] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.195/26] 
block=192.168.100.192/26 handle="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" host="ip-172-31-25-230" Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.863 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" host="ip-172-31-25-230" Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.863 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:56.916661 containerd[1990]: 2025-07-10 00:03:56.863 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" HandleID="k8s-pod-network.b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.916962 containerd[1990]: 2025-07-10 00:03:56.866 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0", GenerateName:"calico-apiserver-94958988c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc7c95fb-936f-48c1-97af-af78b053d354", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94958988c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"calico-apiserver-94958988c-ktf4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88dc6811e27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:56.917099 containerd[1990]: 2025-07-10 00:03:56.866 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.195/32] ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.917099 containerd[1990]: 2025-07-10 00:03:56.867 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88dc6811e27 ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.917099 containerd[1990]: 2025-07-10 00:03:56.877 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.917565 
containerd[1990]: 2025-07-10 00:03:56.878 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0", GenerateName:"calico-apiserver-94958988c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc7c95fb-936f-48c1-97af-af78b053d354", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94958988c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93", Pod:"calico-apiserver-94958988c-ktf4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88dc6811e27", MAC:"92:14:1f:18:a1:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:56.918018 containerd[1990]: 
2025-07-10 00:03:56.910 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-ktf4v" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--ktf4v-eth0" Jul 10 00:03:56.992590 containerd[1990]: time="2025-07-10T00:03:56.991957030Z" level=info msg="connecting to shim b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93" address="unix:///run/containerd/s/f0a45c990956fc6010635c45ba05a8d8cdd03de6932a7bc745378d284e9982c8" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:57.087371 systemd[1]: Started cri-containerd-b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93.scope - libcontainer container b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93. Jul 10 00:03:57.295519 containerd[1990]: time="2025-07-10T00:03:57.295377032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-ktf4v,Uid:bc7c95fb-936f-48c1-97af-af78b053d354,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93\"" Jul 10 00:03:57.377622 systemd-networkd[1885]: cali764f74568d8: Gained IPv6LL Jul 10 00:03:57.580158 containerd[1990]: time="2025-07-10T00:03:57.578824701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwnm7,Uid:b8225626-c244-4702-a672-ad853272263e,Namespace:kube-system,Attempt:0,}" Jul 10 00:03:57.599410 containerd[1990]: time="2025-07-10T00:03:57.598672257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9944db5-x5s7z,Uid:6c47ff3b-ef7c-4494-b399-5a6a62047af4,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:57.621777 containerd[1990]: time="2025-07-10T00:03:57.620448429Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-cvfdh,Uid:63526049-3309-4f65-ad78-b95e459a7f01,Namespace:calico-system,Attempt:0,}" Jul 10 00:03:58.272043 systemd-networkd[1885]: cali4449551c331: Link UP Jul 10 00:03:58.275505 systemd-networkd[1885]: cali4449551c331: Gained carrier Jul 10 00:03:58.333710 containerd[1990]: 2025-07-10 00:03:58.031 [INFO][5050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0 csi-node-driver- calico-system 63526049-3309-4f65-ad78-b95e459a7f01 727 0 2025-07-10 00:03:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-230 csi-node-driver-cvfdh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4449551c331 [] [] }} ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-" Jul 10 00:03:58.333710 containerd[1990]: 2025-07-10 00:03:58.031 [INFO][5050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.333710 containerd[1990]: 2025-07-10 00:03:58.142 [INFO][5087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" HandleID="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Workload="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 
00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.145 [INFO][5087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" HandleID="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Workload="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000341920), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-230", "pod":"csi-node-driver-cvfdh", "timestamp":"2025-07-10 00:03:58.142176572 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.145 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.145 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.145 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.174 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" host="ip-172-31-25-230" Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.194 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.211 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.215 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.334007 containerd[1990]: 2025-07-10 00:03:58.221 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.222 [INFO][5087] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" host="ip-172-31-25-230" Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.225 [INFO][5087] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3 Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.237 [INFO][5087] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" host="ip-172-31-25-230" Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.253 [INFO][5087] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.196/26] 
block=192.168.100.192/26 handle="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" host="ip-172-31-25-230" Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.255 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" host="ip-172-31-25-230" Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.255 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:58.334519 containerd[1990]: 2025-07-10 00:03:58.255 [INFO][5087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" HandleID="k8s-pod-network.b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Workload="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.334833 containerd[1990]: 2025-07-10 00:03:58.263 [INFO][5050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63526049-3309-4f65-ad78-b95e459a7f01", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"csi-node-driver-cvfdh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4449551c331", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.334961 containerd[1990]: 2025-07-10 00:03:58.264 [INFO][5050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.196/32] ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.334961 containerd[1990]: 2025-07-10 00:03:58.264 [INFO][5050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4449551c331 ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.334961 containerd[1990]: 2025-07-10 00:03:58.271 [INFO][5050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.335104 containerd[1990]: 2025-07-10 00:03:58.271 [INFO][5050] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63526049-3309-4f65-ad78-b95e459a7f01", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3", Pod:"csi-node-driver-cvfdh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4449551c331", MAC:"ee:23:08:12:58:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.335211 containerd[1990]: 2025-07-10 00:03:58.316 [INFO][5050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" Namespace="calico-system" Pod="csi-node-driver-cvfdh" WorkloadEndpoint="ip--172--31--25--230-k8s-csi--node--driver--cvfdh-eth0" Jul 10 00:03:58.338006 systemd-networkd[1885]: vxlan.calico: Gained IPv6LL Jul 10 00:03:58.426351 containerd[1990]: time="2025-07-10T00:03:58.425983845Z" level=info msg="connecting to shim b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3" address="unix:///run/containerd/s/359df88032987234dd14bccccf87f01cb3ffc594b6f5201d746733cdde584d23" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:58.478083 systemd-networkd[1885]: cali1eb061aad16: Link UP Jul 10 00:03:58.480934 systemd-networkd[1885]: cali1eb061aad16: Gained carrier Jul 10 00:03:58.548316 systemd[1]: Started cri-containerd-b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3.scope - libcontainer container b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3. Jul 10 00:03:58.570304 containerd[1990]: time="2025-07-10T00:03:58.570257134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7g9,Uid:36ad65bb-9edb-4db0-9097-ab8516085854,Namespace:kube-system,Attempt:0,}" Jul 10 00:03:58.580777 containerd[1990]: 2025-07-10 00:03:57.923 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0 calico-kube-controllers-7f9944db5- calico-system 6c47ff3b-ef7c-4494-b399-5a6a62047af4 861 0 2025-07-10 00:03:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f9944db5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-230 calico-kube-controllers-7f9944db5-x5s7z eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali1eb061aad16 [] [] }} ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-" Jul 10 00:03:58.580777 containerd[1990]: 2025-07-10 00:03:57.924 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.580777 containerd[1990]: 2025-07-10 00:03:58.169 [INFO][5072] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" HandleID="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Workload="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.173 [INFO][5072] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" HandleID="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Workload="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-230", "pod":"calico-kube-controllers-7f9944db5-x5s7z", "timestamp":"2025-07-10 00:03:58.168862316 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:58.582289 
containerd[1990]: 2025-07-10 00:03:58.173 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.255 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.255 [INFO][5072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.312 [INFO][5072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" host="ip-172-31-25-230" Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.341 [INFO][5072] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.368 [INFO][5072] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.379 [INFO][5072] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.582289 containerd[1990]: 2025-07-10 00:03:58.393 [INFO][5072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.393 [INFO][5072] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" host="ip-172-31-25-230" Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.400 [INFO][5072] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14 Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.416 [INFO][5072] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 
handle="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" host="ip-172-31-25-230" Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.440 [INFO][5072] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.197/26] block=192.168.100.192/26 handle="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" host="ip-172-31-25-230" Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.441 [INFO][5072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.197/26] handle="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" host="ip-172-31-25-230" Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.442 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:58.583323 containerd[1990]: 2025-07-10 00:03:58.442 [INFO][5072] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.197/26] IPv6=[] ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" HandleID="k8s-pod-network.4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Workload="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.585211 containerd[1990]: 2025-07-10 00:03:58.459 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0", GenerateName:"calico-kube-controllers-7f9944db5-", Namespace:"calico-system", SelfLink:"", UID:"6c47ff3b-ef7c-4494-b399-5a6a62047af4", ResourceVersion:"861", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9944db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"calico-kube-controllers-7f9944db5-x5s7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eb061aad16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.585497 containerd[1990]: 2025-07-10 00:03:58.461 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.197/32] ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.585497 containerd[1990]: 2025-07-10 00:03:58.461 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eb061aad16 ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.585497 containerd[1990]: 2025-07-10 00:03:58.483 
[INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.585669 containerd[1990]: 2025-07-10 00:03:58.491 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0", GenerateName:"calico-kube-controllers-7f9944db5-", Namespace:"calico-system", SelfLink:"", UID:"6c47ff3b-ef7c-4494-b399-5a6a62047af4", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9944db5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14", Pod:"calico-kube-controllers-7f9944db5-x5s7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eb061aad16", MAC:"16:06:fb:0c:86:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.585793 containerd[1990]: 2025-07-10 00:03:58.538 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" Namespace="calico-system" Pod="calico-kube-controllers-7f9944db5-x5s7z" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--kube--controllers--7f9944db5--x5s7z-eth0" Jul 10 00:03:58.593638 systemd-networkd[1885]: cali88dc6811e27: Gained IPv6LL Jul 10 00:03:58.685611 systemd-networkd[1885]: cali21227a8ace5: Link UP Jul 10 00:03:58.698498 systemd-networkd[1885]: cali21227a8ace5: Gained carrier Jul 10 00:03:58.713466 containerd[1990]: time="2025-07-10T00:03:58.713275655Z" level=info msg="connecting to shim 4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14" address="unix:///run/containerd/s/99ca1c6f4e6b14eac02a3c8d24a0af667931d6045ba80303d2fbde9810cdf8ba" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:58.767014 containerd[1990]: 2025-07-10 00:03:57.989 [INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0 coredns-668d6bf9bc- kube-system b8225626-c244-4702-a672-ad853272263e 853 0 2025-07-10 00:03:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-230 coredns-668d6bf9bc-hwnm7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21227a8ace5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 
}] [] }} ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-" Jul 10 00:03:58.767014 containerd[1990]: 2025-07-10 00:03:57.990 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.767014 containerd[1990]: 2025-07-10 00:03:58.198 [INFO][5080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" HandleID="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.198 [INFO][5080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" HandleID="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000191d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-230", "pod":"coredns-668d6bf9bc-hwnm7", "timestamp":"2025-07-10 00:03:58.198108728 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.198 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.442 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.443 [INFO][5080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.487 [INFO][5080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" host="ip-172-31-25-230" Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.509 [INFO][5080] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.538 [INFO][5080] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.566 [INFO][5080] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.767305 containerd[1990]: 2025-07-10 00:03:58.581 [INFO][5080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.582 [INFO][5080] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" host="ip-172-31-25-230" Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.586 [INFO][5080] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.617 [INFO][5080] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" host="ip-172-31-25-230" Jul 10 00:03:58.770549 
containerd[1990]: 2025-07-10 00:03:58.638 [INFO][5080] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.198/26] block=192.168.100.192/26 handle="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" host="ip-172-31-25-230" Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.639 [INFO][5080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.198/26] handle="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" host="ip-172-31-25-230" Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.640 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:58.770549 containerd[1990]: 2025-07-10 00:03:58.640 [INFO][5080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.198/26] IPv6=[] ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" HandleID="k8s-pod-network.2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.658 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b8225626-c244-4702-a672-ad853272263e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"coredns-668d6bf9bc-hwnm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21227a8ace5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.659 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.198/32] ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.659 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21227a8ace5 ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.705 
[INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.706 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b8225626-c244-4702-a672-ad853272263e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf", Pod:"coredns-668d6bf9bc-hwnm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21227a8ace5", MAC:"be:2f:de:d8:38:77", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:58.770931 containerd[1990]: 2025-07-10 00:03:58.745 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" Namespace="kube-system" Pod="coredns-668d6bf9bc-hwnm7" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--hwnm7-eth0" Jul 10 00:03:58.901236 systemd[1]: Started cri-containerd-4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14.scope - libcontainer container 4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14. Jul 10 00:03:58.956092 containerd[1990]: time="2025-07-10T00:03:58.955376424Z" level=info msg="connecting to shim 2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf" address="unix:///run/containerd/s/216ea27384b81fc67e23da566e001dd482759ef0e01ddb6f41f3c10d4e465899" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:58.971432 containerd[1990]: time="2025-07-10T00:03:58.970984464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvfdh,Uid:63526049-3309-4f65-ad78-b95e459a7f01,Namespace:calico-system,Attempt:0,} returns sandbox id \"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3\"" Jul 10 00:03:59.222710 systemd[1]: Started cri-containerd-2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf.scope - libcontainer container 2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf. 
Jul 10 00:03:59.493838 containerd[1990]: time="2025-07-10T00:03:59.493016483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9944db5-x5s7z,Uid:6c47ff3b-ef7c-4494-b399-5a6a62047af4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14\"" Jul 10 00:03:59.528985 systemd-networkd[1885]: califa2bf2593a0: Link UP Jul 10 00:03:59.535581 systemd-networkd[1885]: califa2bf2593a0: Gained carrier Jul 10 00:03:59.544783 containerd[1990]: time="2025-07-10T00:03:59.544684943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwnm7,Uid:b8225626-c244-4702-a672-ad853272263e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf\"" Jul 10 00:03:59.552623 systemd-networkd[1885]: cali1eb061aad16: Gained IPv6LL Jul 10 00:03:59.573041 containerd[1990]: time="2025-07-10T00:03:59.572955191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-c7snx,Uid:17db8b91-9713-4ad3-8e2a-e7a1b996f01d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:03:59.573596 containerd[1990]: time="2025-07-10T00:03:59.573522191Z" level=info msg="CreateContainer within sandbox \"2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:58.976 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0 coredns-668d6bf9bc- kube-system 36ad65bb-9edb-4db0-9097-ab8516085854 866 0 2025-07-10 00:03:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-230 coredns-668d6bf9bc-rs7g9 eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] califa2bf2593a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:58.976 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.269 [INFO][5249] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" HandleID="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.272 [INFO][5249] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" HandleID="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3620), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-230", "pod":"coredns-668d6bf9bc-rs7g9", "timestamp":"2025-07-10 00:03:59.26929825 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.272 [INFO][5249] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.272 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.274 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.312 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.329 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.358 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.368 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.392 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.394 [INFO][5249] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.411 [INFO][5249] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2 Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.429 [INFO][5249] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" 
host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.469 [INFO][5249] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.199/26] block=192.168.100.192/26 handle="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.473 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.199/26] handle="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" host="ip-172-31-25-230" Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.474 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:03:59.596834 containerd[1990]: 2025-07-10 00:03:59.476 [INFO][5249] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.199/26] IPv6=[] ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" HandleID="k8s-pod-network.983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Workload="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.598268 containerd[1990]: 2025-07-10 00:03:59.495 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36ad65bb-9edb-4db0-9097-ab8516085854", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"coredns-668d6bf9bc-rs7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa2bf2593a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:59.598268 containerd[1990]: 2025-07-10 00:03:59.496 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.199/32] ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.598268 containerd[1990]: 2025-07-10 00:03:59.496 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa2bf2593a0 ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.598268 
containerd[1990]: 2025-07-10 00:03:59.542 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.598268 containerd[1990]: 2025-07-10 00:03:59.544 [INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36ad65bb-9edb-4db0-9097-ab8516085854", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2", Pod:"coredns-668d6bf9bc-rs7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa2bf2593a0", 
MAC:"ae:54:5f:5b:e9:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:03:59.598268 containerd[1990]: 2025-07-10 00:03:59.580 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7g9" WorkloadEndpoint="ip--172--31--25--230-k8s-coredns--668d6bf9bc--rs7g9-eth0" Jul 10 00:03:59.669350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783231838.mount: Deactivated successfully. 
Jul 10 00:03:59.671314 containerd[1990]: time="2025-07-10T00:03:59.671220756Z" level=info msg="Container 4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:03:59.681208 systemd-networkd[1885]: cali4449551c331: Gained IPv6LL Jul 10 00:03:59.722158 containerd[1990]: time="2025-07-10T00:03:59.721984704Z" level=info msg="CreateContainer within sandbox \"2f28ff2f5dad352ab5073b28a666ecf0065a98f9c7a13494a158ef07470e3cbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b\"" Jul 10 00:03:59.723816 containerd[1990]: time="2025-07-10T00:03:59.723770460Z" level=info msg="StartContainer for \"4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b\"" Jul 10 00:03:59.728528 containerd[1990]: time="2025-07-10T00:03:59.728277972Z" level=info msg="connecting to shim 4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b" address="unix:///run/containerd/s/216ea27384b81fc67e23da566e001dd482759ef0e01ddb6f41f3c10d4e465899" protocol=ttrpc version=3 Jul 10 00:03:59.763526 containerd[1990]: time="2025-07-10T00:03:59.763463232Z" level=info msg="connecting to shim 983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2" address="unix:///run/containerd/s/18fd1dcc5465a92e0f3223e09d3a9ff9b71f777a01473a637bd2459b69db4cfe" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:03:59.808007 systemd[1]: Started cri-containerd-4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b.scope - libcontainer container 4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b. Jul 10 00:03:59.907763 systemd[1]: Started cri-containerd-983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2.scope - libcontainer container 983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2. 
Jul 10 00:04:00.028447 containerd[1990]: time="2025-07-10T00:04:00.027568809Z" level=info msg="StartContainer for \"4b36e0f6e0a643e032f2f214ea889c857e9a90dd93c377aa2d9ff4a7e309404b\" returns successfully" Jul 10 00:04:00.179509 containerd[1990]: time="2025-07-10T00:04:00.179441590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7g9,Uid:36ad65bb-9edb-4db0-9097-ab8516085854,Namespace:kube-system,Attempt:0,} returns sandbox id \"983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2\"" Jul 10 00:04:00.195637 containerd[1990]: time="2025-07-10T00:04:00.195589762Z" level=info msg="CreateContainer within sandbox \"983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:04:00.208040 kubelet[3298]: I0710 00:04:00.206212 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hwnm7" podStartSLOduration=54.206186914 podStartE2EDuration="54.206186914s" podCreationTimestamp="2025-07-10 00:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:04:00.202912846 +0000 UTC m=+59.851960570" watchObservedRunningTime="2025-07-10 00:04:00.206186914 +0000 UTC m=+59.855234638" Jul 10 00:04:00.325436 containerd[1990]: time="2025-07-10T00:04:00.324943271Z" level=info msg="Container c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:00.365688 systemd-networkd[1885]: calib4d39e820e2: Link UP Jul 10 00:04:00.370017 systemd-networkd[1885]: calib4d39e820e2: Gained carrier Jul 10 00:04:00.377471 containerd[1990]: time="2025-07-10T00:04:00.375733823Z" level=info msg="CreateContainer within sandbox \"983634098e37ff37cc95b4a29c1eaddcb994b08621aede53aa83ead4402340f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102\"" Jul 10 00:04:00.392097 containerd[1990]: time="2025-07-10T00:04:00.391214195Z" level=info msg="StartContainer for \"c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102\"" Jul 10 00:04:00.401320 containerd[1990]: time="2025-07-10T00:04:00.401214299Z" level=info msg="connecting to shim c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102" address="unix:///run/containerd/s/18fd1dcc5465a92e0f3223e09d3a9ff9b71f777a01473a637bd2459b69db4cfe" protocol=ttrpc version=3 Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:03:59.891 [INFO][5292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0 calico-apiserver-94958988c- calico-apiserver 17db8b91-9713-4ad3-8e2a-e7a1b996f01d 865 0 2025-07-10 00:03:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94958988c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-230 calico-apiserver-94958988c-c7snx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4d39e820e2 [] [] }} ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:03:59.894 [INFO][5292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.458625 containerd[1990]: 
2025-07-10 00:04:00.127 [INFO][5367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" HandleID="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.129 [INFO][5367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" HandleID="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b7280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-230", "pod":"calico-apiserver-94958988c-c7snx", "timestamp":"2025-07-10 00:04:00.127308766 +0000 UTC"}, Hostname:"ip-172-31-25-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.130 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.130 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.130 [INFO][5367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-230' Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.201 [INFO][5367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.249 [INFO][5367] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.267 [INFO][5367] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.274 [INFO][5367] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.288 [INFO][5367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.288 [INFO][5367] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.295 [INFO][5367] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1 Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.307 [INFO][5367] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.329 [INFO][5367] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.200/26] 
block=192.168.100.192/26 handle="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.330 [INFO][5367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.200/26] handle="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" host="ip-172-31-25-230" Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.330 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:04:00.458625 containerd[1990]: 2025-07-10 00:04:00.330 [INFO][5367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.200/26] IPv6=[] ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" HandleID="k8s-pod-network.7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Workload="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.461333 containerd[1990]: 2025-07-10 00:04:00.344 [INFO][5292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0", GenerateName:"calico-apiserver-94958988c-", Namespace:"calico-apiserver", SelfLink:"", UID:"17db8b91-9713-4ad3-8e2a-e7a1b996f01d", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94958988c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"", Pod:"calico-apiserver-94958988c-c7snx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4d39e820e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:04:00.461333 containerd[1990]: 2025-07-10 00:04:00.345 [INFO][5292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.200/32] ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.461333 containerd[1990]: 2025-07-10 00:04:00.346 [INFO][5292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4d39e820e2 ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.461333 containerd[1990]: 2025-07-10 00:04:00.393 [INFO][5292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.461333 
containerd[1990]: 2025-07-10 00:04:00.396 [INFO][5292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0", GenerateName:"calico-apiserver-94958988c-", Namespace:"calico-apiserver", SelfLink:"", UID:"17db8b91-9713-4ad3-8e2a-e7a1b996f01d", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94958988c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-230", ContainerID:"7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1", Pod:"calico-apiserver-94958988c-c7snx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4d39e820e2", MAC:"1a:32:ec:9b:9e:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:04:00.461333 containerd[1990]: 
2025-07-10 00:04:00.441 [INFO][5292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" Namespace="calico-apiserver" Pod="calico-apiserver-94958988c-c7snx" WorkloadEndpoint="ip--172--31--25--230-k8s-calico--apiserver--94958988c--c7snx-eth0" Jul 10 00:04:00.510742 systemd[1]: Started cri-containerd-c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102.scope - libcontainer container c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102. Jul 10 00:04:00.654035 containerd[1990]: time="2025-07-10T00:04:00.653854572Z" level=info msg="connecting to shim 7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1" address="unix:///run/containerd/s/7b64091d13d8751fa33a6405b7d7692b01f01004b7de8e1e8e0abac6cc8d808d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:04:00.705728 systemd-networkd[1885]: cali21227a8ace5: Gained IPv6LL Jul 10 00:04:00.764844 containerd[1990]: time="2025-07-10T00:04:00.764778481Z" level=info msg="StartContainer for \"c6e84c87a55bd985dc8ed35455b24ea75c3492bee8a262cfb1821090c7ff2102\" returns successfully" Jul 10 00:04:00.819107 systemd[1]: Started cri-containerd-7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1.scope - libcontainer container 7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1. 
Jul 10 00:04:01.203909 containerd[1990]: time="2025-07-10T00:04:01.203607359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94958988c-c7snx,Uid:17db8b91-9713-4ad3-8e2a-e7a1b996f01d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1\"" Jul 10 00:04:01.280467 kubelet[3298]: I0710 00:04:01.280034 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rs7g9" podStartSLOduration=55.28000992 podStartE2EDuration="55.28000992s" podCreationTimestamp="2025-07-10 00:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:04:01.223197431 +0000 UTC m=+60.872245155" watchObservedRunningTime="2025-07-10 00:04:01.28000992 +0000 UTC m=+60.929057644" Jul 10 00:04:01.387641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95012066.mount: Deactivated successfully. 
Jul 10 00:04:01.427899 containerd[1990]: time="2025-07-10T00:04:01.427823412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:01.430790 containerd[1990]: time="2025-07-10T00:04:01.430709124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 00:04:01.433078 containerd[1990]: time="2025-07-10T00:04:01.432823668Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:01.439580 containerd[1990]: time="2025-07-10T00:04:01.439485468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:01.441135 containerd[1990]: time="2025-07-10T00:04:01.440909796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 5.268615722s" Jul 10 00:04:01.441135 containerd[1990]: time="2025-07-10T00:04:01.440971068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 00:04:01.443409 containerd[1990]: time="2025-07-10T00:04:01.443271912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:04:01.448728 containerd[1990]: time="2025-07-10T00:04:01.448023684Z" level=info msg="CreateContainer within sandbox 
\"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:04:01.465872 containerd[1990]: time="2025-07-10T00:04:01.465272113Z" level=info msg="Container 5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:01.488469 containerd[1990]: time="2025-07-10T00:04:01.488417221Z" level=info msg="CreateContainer within sandbox \"5167771177a25f4dae5354b95d25af25e4716379998484bfc42c089d21ad6583\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160\"" Jul 10 00:04:01.489546 containerd[1990]: time="2025-07-10T00:04:01.489502957Z" level=info msg="StartContainer for \"5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160\"" Jul 10 00:04:01.492368 containerd[1990]: time="2025-07-10T00:04:01.492240313Z" level=info msg="connecting to shim 5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160" address="unix:///run/containerd/s/1319406ee65e95b0bd1e21a817369723b1ea4ce35e200d392f78bc0137a94558" protocol=ttrpc version=3 Jul 10 00:04:01.530707 systemd[1]: Started cri-containerd-5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160.scope - libcontainer container 5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160. 
Jul 10 00:04:01.537488 systemd-networkd[1885]: califa2bf2593a0: Gained IPv6LL Jul 10 00:04:01.640336 containerd[1990]: time="2025-07-10T00:04:01.640216069Z" level=info msg="StartContainer for \"5c2fef500daa34f7f6affd634bb313b0a726d4065c67f6ad53997f7956d4a160\" returns successfully" Jul 10 00:04:02.227886 kubelet[3298]: I0710 00:04:02.227239 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64c668f858-g2g5j" podStartSLOduration=1.936820976 podStartE2EDuration="9.22721544s" podCreationTimestamp="2025-07-10 00:03:53 +0000 UTC" firstStartedPulling="2025-07-10 00:03:54.152479384 +0000 UTC m=+53.801527096" lastFinishedPulling="2025-07-10 00:04:01.442873836 +0000 UTC m=+61.091921560" observedRunningTime="2025-07-10 00:04:02.19975014 +0000 UTC m=+61.848797840" watchObservedRunningTime="2025-07-10 00:04:02.22721544 +0000 UTC m=+61.876263152" Jul 10 00:04:02.368860 systemd-networkd[1885]: calib4d39e820e2: Gained IPv6LL Jul 10 00:04:03.375138 systemd[1]: Started sshd@7-172.31.25.230:22-139.178.89.65:40096.service - OpenSSH per-connection server daemon (139.178.89.65:40096). Jul 10 00:04:03.589113 sshd[5552]: Accepted publickey for core from 139.178.89.65 port 40096 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:03.593138 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:03.603043 systemd-logind[1981]: New session 8 of user core. Jul 10 00:04:03.611653 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:04:03.927527 sshd[5554]: Connection closed by 139.178.89.65 port 40096 Jul 10 00:04:03.928709 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:03.935902 systemd[1]: sshd@7-172.31.25.230:22-139.178.89.65:40096.service: Deactivated successfully. Jul 10 00:04:03.941268 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:04:03.943509 systemd-logind[1981]: Session 8 logged out. 
Waiting for processes to exit. Jul 10 00:04:03.947496 systemd-logind[1981]: Removed session 8.
Jul 10 00:04:05.191359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165482063.mount: Deactivated successfully.
Jul 10 00:04:05.235142 ntpd[1974]: Listen normally on 8 vxlan.calico 192.168.100.192:123
Jul 10 00:04:05.235365 ntpd[1974]: Listen normally on 9 cali601ec9fff03 [fe80::ecee:eeff:feee:eeee%4]:123
Jul 10 00:04:05.235481 ntpd[1974]: Listen normally on 10 cali764f74568d8 [fe80::ecee:eeff:feee:eeee%5]:123
Jul 10 00:04:05.235867 ntpd[1974]: Listen normally on 11 vxlan.calico [fe80::64cd:acff:feeb:8fbc%6]:123
Jul 10 00:04:05.235954 ntpd[1974]: Listen normally on 12 cali88dc6811e27 [fe80::ecee:eeff:feee:eeee%9]:123
Jul 10 00:04:05.236020 ntpd[1974]: Listen normally on 13 cali4449551c331 [fe80::ecee:eeff:feee:eeee%10]:123
Jul 10 00:04:05.236085 ntpd[1974]: Listen normally on 14 cali1eb061aad16 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 10 00:04:05.236148 ntpd[1974]: Listen normally on 15 cali21227a8ace5 [fe80::ecee:eeff:feee:eeee%12]:123
Jul 10 00:04:05.236211 ntpd[1974]: Listen normally on 16 califa2bf2593a0 [fe80::ecee:eeff:feee:eeee%13]:123
Jul 10 00:04:05.236274 ntpd[1974]: Listen normally on 17 calib4d39e820e2 [fe80::ecee:eeff:feee:eeee%14]:123
Jul 10 00:04:06.146528 containerd[1990]: time="2025-07-10T00:04:06.145546048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:04:06.149982 containerd[1990]: time="2025-07-10T00:04:06.149905396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 10 00:04:06.151533 containerd[1990]: time="2025-07-10T00:04:06.151425748Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:04:06.159676 containerd[1990]: time="2025-07-10T00:04:06.159585868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:04:06.161220 containerd[1990]: time="2025-07-10T00:04:06.160971508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size
\"61838636\" in 4.71763866s" Jul 10 00:04:06.161220 containerd[1990]: time="2025-07-10T00:04:06.161033680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 00:04:06.164602 containerd[1990]: time="2025-07-10T00:04:06.163960504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:04:06.166113 containerd[1990]: time="2025-07-10T00:04:06.166039288Z" level=info msg="CreateContainer within sandbox \"3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:04:06.187069 containerd[1990]: time="2025-07-10T00:04:06.185779024Z" level=info msg="Container d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:06.196855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424206018.mount: Deactivated successfully. 
Jul 10 00:04:06.227323 containerd[1990]: time="2025-07-10T00:04:06.227265868Z" level=info msg="CreateContainer within sandbox \"3c11fd801db332aeaf1e42509271a934694675844d19628e101b8327aba153ee\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\"" Jul 10 00:04:06.231073 containerd[1990]: time="2025-07-10T00:04:06.230852788Z" level=info msg="StartContainer for \"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\"" Jul 10 00:04:06.238325 containerd[1990]: time="2025-07-10T00:04:06.238204852Z" level=info msg="connecting to shim d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c" address="unix:///run/containerd/s/0c75c9f497bb54c0192fee88d7955169eaff684b81bea6aaa70a1bc8c6539d6f" protocol=ttrpc version=3 Jul 10 00:04:06.318731 systemd[1]: Started cri-containerd-d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c.scope - libcontainer container d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c. 
Jul 10 00:04:06.421061 containerd[1990]: time="2025-07-10T00:04:06.420913205Z" level=info msg="StartContainer for \"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" returns successfully" Jul 10 00:04:07.256042 kubelet[3298]: I0710 00:04:07.255903 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-hvkcr" podStartSLOduration=26.459849104 podStartE2EDuration="36.255879665s" podCreationTimestamp="2025-07-10 00:03:31 +0000 UTC" firstStartedPulling="2025-07-10 00:03:56.367624003 +0000 UTC m=+56.016671715" lastFinishedPulling="2025-07-10 00:04:06.16365448 +0000 UTC m=+65.812702276" observedRunningTime="2025-07-10 00:04:07.255132029 +0000 UTC m=+66.904179765" watchObservedRunningTime="2025-07-10 00:04:07.255879665 +0000 UTC m=+66.904927365" Jul 10 00:04:07.470636 containerd[1990]: time="2025-07-10T00:04:07.470354094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"a934c5be7ff27139bf50be3ca8fab3f41ee49cc8d0e5d668d8dadbf17cf41a32\" pid:5629 exit_status:1 exited_at:{seconds:1752105847 nanos:469812210}" Jul 10 00:04:08.378649 containerd[1990]: time="2025-07-10T00:04:08.378499927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"9821a932277c9f1fbf3e9f561f1630dd3b4dda09dafce2934c05b8a9d0436e43\" pid:5659 exit_status:1 exited_at:{seconds:1752105848 nanos:378023143}" Jul 10 00:04:08.965856 systemd[1]: Started sshd@8-172.31.25.230:22-139.178.89.65:40098.service - OpenSSH per-connection server daemon (139.178.89.65:40098). 
Jul 10 00:04:09.176846 sshd[5670]: Accepted publickey for core from 139.178.89.65 port 40098 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:09.180114 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:09.190818 systemd-logind[1981]: New session 9 of user core. Jul 10 00:04:09.199671 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:04:09.488441 sshd[5672]: Connection closed by 139.178.89.65 port 40098 Jul 10 00:04:09.488717 sshd-session[5670]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:09.495254 systemd-logind[1981]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:04:09.495761 systemd[1]: sshd@8-172.31.25.230:22-139.178.89.65:40098.service: Deactivated successfully. Jul 10 00:04:09.500559 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:04:09.506298 systemd-logind[1981]: Removed session 9. Jul 10 00:04:11.872910 containerd[1990]: time="2025-07-10T00:04:11.872824968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:11.874881 containerd[1990]: time="2025-07-10T00:04:11.874810908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 00:04:11.875721 containerd[1990]: time="2025-07-10T00:04:11.875673792Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:11.880945 containerd[1990]: time="2025-07-10T00:04:11.880867884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:11.883658 containerd[1990]: 
time="2025-07-10T00:04:11.883517688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.719498228s" Jul 10 00:04:11.883658 containerd[1990]: time="2025-07-10T00:04:11.883569132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:04:11.887328 containerd[1990]: time="2025-07-10T00:04:11.886929096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:04:11.888439 containerd[1990]: time="2025-07-10T00:04:11.888355992Z" level=info msg="CreateContainer within sandbox \"b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:04:11.903229 containerd[1990]: time="2025-07-10T00:04:11.901894428Z" level=info msg="Container 4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:11.921542 containerd[1990]: time="2025-07-10T00:04:11.921467304Z" level=info msg="CreateContainer within sandbox \"b19b3cfff78a1caf1adc0c5e894827b70cb45594cbd81022dc79889fd1b22c93\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a\"" Jul 10 00:04:11.924562 containerd[1990]: time="2025-07-10T00:04:11.924466140Z" level=info msg="StartContainer for \"4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a\"" Jul 10 00:04:11.927044 containerd[1990]: time="2025-07-10T00:04:11.926986800Z" level=info msg="connecting to shim 4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a" 
address="unix:///run/containerd/s/f0a45c990956fc6010635c45ba05a8d8cdd03de6932a7bc745378d284e9982c8" protocol=ttrpc version=3 Jul 10 00:04:11.972763 systemd[1]: Started cri-containerd-4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a.scope - libcontainer container 4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a. Jul 10 00:04:12.062240 containerd[1990]: time="2025-07-10T00:04:12.062171253Z" level=info msg="StartContainer for \"4e834ddb346eeb2d346239039086b9d0b0c44914a69b41c154f5974027c6747a\" returns successfully" Jul 10 00:04:12.268516 kubelet[3298]: I0710 00:04:12.268238 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94958988c-ktf4v" podStartSLOduration=36.682083838 podStartE2EDuration="51.268148194s" podCreationTimestamp="2025-07-10 00:03:21 +0000 UTC" firstStartedPulling="2025-07-10 00:03:57.298651412 +0000 UTC m=+56.947699124" lastFinishedPulling="2025-07-10 00:04:11.884715768 +0000 UTC m=+71.533763480" observedRunningTime="2025-07-10 00:04:12.266449822 +0000 UTC m=+71.915497558" watchObservedRunningTime="2025-07-10 00:04:12.268148194 +0000 UTC m=+71.917195906" Jul 10 00:04:13.255640 kubelet[3298]: I0710 00:04:13.255564 3298 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:04:13.415772 containerd[1990]: time="2025-07-10T00:04:13.415716756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:13.417731 containerd[1990]: time="2025-07-10T00:04:13.417677148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 00:04:13.420107 containerd[1990]: time="2025-07-10T00:04:13.420028788Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:13.424900 
containerd[1990]: time="2025-07-10T00:04:13.424747800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:13.427299 containerd[1990]: time="2025-07-10T00:04:13.427096716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.540104656s" Jul 10 00:04:13.427299 containerd[1990]: time="2025-07-10T00:04:13.427260900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 00:04:13.431346 containerd[1990]: time="2025-07-10T00:04:13.431041992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:04:13.437506 containerd[1990]: time="2025-07-10T00:04:13.437271288Z" level=info msg="CreateContainer within sandbox \"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:04:13.462526 containerd[1990]: time="2025-07-10T00:04:13.462257508Z" level=info msg="Container 23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:13.505947 containerd[1990]: time="2025-07-10T00:04:13.505727160Z" level=info msg="CreateContainer within sandbox \"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df\"" Jul 10 00:04:13.508764 containerd[1990]: time="2025-07-10T00:04:13.508635924Z" level=info 
msg="StartContainer for \"23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df\"" Jul 10 00:04:13.514576 containerd[1990]: time="2025-07-10T00:04:13.514364856Z" level=info msg="connecting to shim 23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df" address="unix:///run/containerd/s/359df88032987234dd14bccccf87f01cb3ffc594b6f5201d746733cdde584d23" protocol=ttrpc version=3 Jul 10 00:04:13.557780 systemd[1]: Started cri-containerd-23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df.scope - libcontainer container 23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df. Jul 10 00:04:13.668151 containerd[1990]: time="2025-07-10T00:04:13.668044021Z" level=info msg="StartContainer for \"23b0ef0e1c9480d6ee5f69b334cdd7915479ea98039de097dd9625d37e3594df\" returns successfully" Jul 10 00:04:14.266173 kubelet[3298]: I0710 00:04:14.266128 3298 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:04:14.529337 systemd[1]: Started sshd@9-172.31.25.230:22-139.178.89.65:50810.service - OpenSSH per-connection server daemon (139.178.89.65:50810). Jul 10 00:04:14.755917 sshd[5767]: Accepted publickey for core from 139.178.89.65 port 50810 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:14.760590 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:14.780737 systemd-logind[1981]: New session 10 of user core. Jul 10 00:04:14.787972 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:04:15.185629 sshd[5769]: Connection closed by 139.178.89.65 port 50810 Jul 10 00:04:15.186054 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:15.199107 systemd[1]: sshd@9-172.31.25.230:22-139.178.89.65:50810.service: Deactivated successfully. Jul 10 00:04:15.206487 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:04:15.211267 systemd-logind[1981]: Session 10 logged out. 
Waiting for processes to exit. Jul 10 00:04:15.233607 systemd[1]: Started sshd@10-172.31.25.230:22-139.178.89.65:50814.service - OpenSSH per-connection server daemon (139.178.89.65:50814). Jul 10 00:04:15.244859 systemd-logind[1981]: Removed session 10. Jul 10 00:04:15.444708 sshd[5786]: Accepted publickey for core from 139.178.89.65 port 50814 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:15.446823 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:15.456324 systemd-logind[1981]: New session 11 of user core. Jul 10 00:04:15.463650 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:04:15.971800 sshd[5788]: Connection closed by 139.178.89.65 port 50814 Jul 10 00:04:15.971421 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:15.987434 systemd[1]: sshd@10-172.31.25.230:22-139.178.89.65:50814.service: Deactivated successfully. Jul 10 00:04:15.994349 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:04:16.041477 systemd-logind[1981]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:04:16.046114 systemd[1]: Started sshd@11-172.31.25.230:22-139.178.89.65:50820.service - OpenSSH per-connection server daemon (139.178.89.65:50820). Jul 10 00:04:16.055809 systemd-logind[1981]: Removed session 11. Jul 10 00:04:16.306041 sshd[5798]: Accepted publickey for core from 139.178.89.65 port 50820 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:16.308793 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:16.330903 systemd-logind[1981]: New session 12 of user core. Jul 10 00:04:16.345443 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 00:04:16.744802 sshd[5804]: Connection closed by 139.178.89.65 port 50820 Jul 10 00:04:16.747589 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:16.758475 systemd-logind[1981]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:04:16.759669 systemd[1]: sshd@11-172.31.25.230:22-139.178.89.65:50820.service: Deactivated successfully. Jul 10 00:04:16.765995 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:04:16.772964 systemd-logind[1981]: Removed session 12. Jul 10 00:04:17.621264 containerd[1990]: time="2025-07-10T00:04:17.621196973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:17.623369 containerd[1990]: time="2025-07-10T00:04:17.623291837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 00:04:17.626032 containerd[1990]: time="2025-07-10T00:04:17.625954229Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:17.631528 containerd[1990]: time="2025-07-10T00:04:17.631461065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:17.633489 containerd[1990]: time="2025-07-10T00:04:17.633044861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.201804665s" Jul 10 
00:04:17.633489 containerd[1990]: time="2025-07-10T00:04:17.633104573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 00:04:17.634988 containerd[1990]: time="2025-07-10T00:04:17.634942601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:04:17.668752 containerd[1990]: time="2025-07-10T00:04:17.668685809Z" level=info msg="CreateContainer within sandbox \"4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:04:17.687476 containerd[1990]: time="2025-07-10T00:04:17.685346453Z" level=info msg="Container a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:17.711719 containerd[1990]: time="2025-07-10T00:04:17.711659645Z" level=info msg="CreateContainer within sandbox \"4bfbe30ed67e853353d4bdbcedf1e4f68227bfc9a48acb09a6f35863f8690a14\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\"" Jul 10 00:04:17.715334 containerd[1990]: time="2025-07-10T00:04:17.715288817Z" level=info msg="StartContainer for \"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\"" Jul 10 00:04:17.719129 containerd[1990]: time="2025-07-10T00:04:17.718707353Z" level=info msg="connecting to shim a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1" address="unix:///run/containerd/s/99ca1c6f4e6b14eac02a3c8d24a0af667931d6045ba80303d2fbde9810cdf8ba" protocol=ttrpc version=3 Jul 10 00:04:17.766706 systemd[1]: Started cri-containerd-a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1.scope - libcontainer container a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1. 
Jul 10 00:04:17.858624 containerd[1990]: time="2025-07-10T00:04:17.858561426Z" level=info msg="StartContainer for \"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" returns successfully" Jul 10 00:04:17.978540 containerd[1990]: time="2025-07-10T00:04:17.977598907Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:17.981824 containerd[1990]: time="2025-07-10T00:04:17.981772591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:04:17.985377 containerd[1990]: time="2025-07-10T00:04:17.985300927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 350.104106ms" Jul 10 00:04:17.985714 containerd[1990]: time="2025-07-10T00:04:17.985652563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:04:17.988313 containerd[1990]: time="2025-07-10T00:04:17.988095535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:04:17.991707 containerd[1990]: time="2025-07-10T00:04:17.991021663Z" level=info msg="CreateContainer within sandbox \"7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:04:18.010483 containerd[1990]: time="2025-07-10T00:04:18.010380291Z" level=info msg="Container a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:18.052460 
containerd[1990]: time="2025-07-10T00:04:18.052343631Z" level=info msg="CreateContainer within sandbox \"7045a5214b3968d7023c4cbba4cd69cbd9221287e96f6ee22801738bc8eee5a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc\"" Jul 10 00:04:18.059753 containerd[1990]: time="2025-07-10T00:04:18.059683791Z" level=info msg="StartContainer for \"a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc\"" Jul 10 00:04:18.067352 containerd[1990]: time="2025-07-10T00:04:18.066897063Z" level=info msg="connecting to shim a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc" address="unix:///run/containerd/s/7b64091d13d8751fa33a6405b7d7692b01f01004b7de8e1e8e0abac6cc8d808d" protocol=ttrpc version=3 Jul 10 00:04:18.136613 systemd[1]: Started cri-containerd-a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc.scope - libcontainer container a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc. 
Jul 10 00:04:18.271107 containerd[1990]: time="2025-07-10T00:04:18.271036480Z" level=info msg="StartContainer for \"a15d9250723b23238c4238240ea431c3b256d3090de4b4328d2a4cbacad66acc\" returns successfully" Jul 10 00:04:18.340485 kubelet[3298]: I0710 00:04:18.339488 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94958988c-c7snx" podStartSLOduration=40.564521532 podStartE2EDuration="57.339458272s" podCreationTimestamp="2025-07-10 00:03:21 +0000 UTC" firstStartedPulling="2025-07-10 00:04:01.212167823 +0000 UTC m=+60.861215535" lastFinishedPulling="2025-07-10 00:04:17.987104563 +0000 UTC m=+77.636152275" observedRunningTime="2025-07-10 00:04:18.337663276 +0000 UTC m=+77.986711000" watchObservedRunningTime="2025-07-10 00:04:18.339458272 +0000 UTC m=+77.988506080" Jul 10 00:04:18.463427 containerd[1990]: time="2025-07-10T00:04:18.463338653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" id:\"667c605f1ccf6a44b73266be29a6ef6a9c5740aa4a25a4f77486b5b3ffb2cfd3\" pid:5907 exited_at:{seconds:1752105858 nanos:460703729}" Jul 10 00:04:18.492594 kubelet[3298]: I0710 00:04:18.492376 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f9944db5-x5s7z" podStartSLOduration=28.362049659 podStartE2EDuration="46.491657009s" podCreationTimestamp="2025-07-10 00:03:32 +0000 UTC" firstStartedPulling="2025-07-10 00:03:59.505163639 +0000 UTC m=+59.154211351" lastFinishedPulling="2025-07-10 00:04:17.634770977 +0000 UTC m=+77.283818701" observedRunningTime="2025-07-10 00:04:18.385813229 +0000 UTC m=+78.034860965" watchObservedRunningTime="2025-07-10 00:04:18.491657009 +0000 UTC m=+78.140704721" Jul 10 00:04:19.969441 containerd[1990]: time="2025-07-10T00:04:19.968366252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:19.971439 containerd[1990]: time="2025-07-10T00:04:19.970909628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 00:04:19.972785 containerd[1990]: time="2025-07-10T00:04:19.972647216Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:19.978845 containerd[1990]: time="2025-07-10T00:04:19.978770012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:04:19.982575 containerd[1990]: time="2025-07-10T00:04:19.982434368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.994272389s" Jul 10 00:04:19.982895 containerd[1990]: time="2025-07-10T00:04:19.982531556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 00:04:19.993558 containerd[1990]: time="2025-07-10T00:04:19.992364033Z" level=info msg="CreateContainer within sandbox \"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:04:20.009478 containerd[1990]: time="2025-07-10T00:04:20.009180065Z" level=info msg="Container 
61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:04:20.030255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272439751.mount: Deactivated successfully. Jul 10 00:04:20.040887 containerd[1990]: time="2025-07-10T00:04:20.040832909Z" level=info msg="CreateContainer within sandbox \"b850e6fab7c7b9c95ba2dbdbff62c524a5ce32ecbe94cc7ec395035449d428f3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1\"" Jul 10 00:04:20.042420 containerd[1990]: time="2025-07-10T00:04:20.042129089Z" level=info msg="StartContainer for \"61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1\"" Jul 10 00:04:20.045846 containerd[1990]: time="2025-07-10T00:04:20.045793937Z" level=info msg="connecting to shim 61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1" address="unix:///run/containerd/s/359df88032987234dd14bccccf87f01cb3ffc594b6f5201d746733cdde584d23" protocol=ttrpc version=3 Jul 10 00:04:20.098511 systemd[1]: Started cri-containerd-61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1.scope - libcontainer container 61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1. 
Jul 10 00:04:20.248570 containerd[1990]: time="2025-07-10T00:04:20.248142210Z" level=info msg="StartContainer for \"61ec757bdbb193f247b0bbd38d86bc01ee1e7538f74d31e6599ff7764e03cff1\" returns successfully" Jul 10 00:04:20.358834 kubelet[3298]: I0710 00:04:20.358577 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cvfdh" podStartSLOduration=27.350819037 podStartE2EDuration="48.358549782s" podCreationTimestamp="2025-07-10 00:03:32 +0000 UTC" firstStartedPulling="2025-07-10 00:03:58.977578788 +0000 UTC m=+58.626626500" lastFinishedPulling="2025-07-10 00:04:19.985309533 +0000 UTC m=+79.634357245" observedRunningTime="2025-07-10 00:04:20.357913902 +0000 UTC m=+80.006961614" watchObservedRunningTime="2025-07-10 00:04:20.358549782 +0000 UTC m=+80.007597686" Jul 10 00:04:20.811623 kubelet[3298]: I0710 00:04:20.811582 3298 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:04:20.812216 kubelet[3298]: I0710 00:04:20.812118 3298 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:04:21.786252 systemd[1]: Started sshd@12-172.31.25.230:22-139.178.89.65:35784.service - OpenSSH per-connection server daemon (139.178.89.65:35784). Jul 10 00:04:22.001363 sshd[5971]: Accepted publickey for core from 139.178.89.65 port 35784 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:22.004502 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:22.013087 systemd-logind[1981]: New session 13 of user core. Jul 10 00:04:22.019663 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 10 00:04:22.291802 sshd[5974]: Connection closed by 139.178.89.65 port 35784 Jul 10 00:04:22.292729 sshd-session[5971]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:22.299734 systemd[1]: sshd@12-172.31.25.230:22-139.178.89.65:35784.service: Deactivated successfully. Jul 10 00:04:22.305778 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:04:22.308706 systemd-logind[1981]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:04:22.313833 systemd-logind[1981]: Removed session 13. Jul 10 00:04:24.142308 containerd[1990]: time="2025-07-10T00:04:24.142175157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" id:\"61285ee2017bd6c476fbfc909105098ee8db8f049626491aff3258da63aa81de\" pid:6000 exit_status:1 exited_at:{seconds:1752105864 nanos:141731217}" Jul 10 00:04:27.333063 systemd[1]: Started sshd@13-172.31.25.230:22-139.178.89.65:35794.service - OpenSSH per-connection server daemon (139.178.89.65:35794). Jul 10 00:04:27.554802 sshd[6013]: Accepted publickey for core from 139.178.89.65 port 35794 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:27.558253 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:27.570832 systemd-logind[1981]: New session 14 of user core. Jul 10 00:04:27.578795 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:04:27.846441 sshd[6015]: Connection closed by 139.178.89.65 port 35794 Jul 10 00:04:27.847804 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:27.857570 systemd[1]: sshd@13-172.31.25.230:22-139.178.89.65:35794.service: Deactivated successfully. Jul 10 00:04:27.862215 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:04:27.863800 systemd-logind[1981]: Session 14 logged out. Waiting for processes to exit. 
Jul 10 00:04:27.867810 systemd-logind[1981]: Removed session 14. Jul 10 00:04:32.915894 systemd[1]: Started sshd@14-172.31.25.230:22-139.178.89.65:53124.service - OpenSSH per-connection server daemon (139.178.89.65:53124). Jul 10 00:04:33.125795 sshd[6040]: Accepted publickey for core from 139.178.89.65 port 53124 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:33.127513 containerd[1990]: time="2025-07-10T00:04:33.127224942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"dac5cf6a5d9ea7aa29e19b172521bb146bbdf7c334a99b3ee0abfbc2889689af\" pid:6039 exited_at:{seconds:1752105873 nanos:123658866}" Jul 10 00:04:33.130552 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:33.142784 systemd-logind[1981]: New session 15 of user core. Jul 10 00:04:33.146922 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:04:33.472132 sshd[6051]: Connection closed by 139.178.89.65 port 53124 Jul 10 00:04:33.473818 sshd-session[6040]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:33.482731 systemd[1]: sshd@14-172.31.25.230:22-139.178.89.65:53124.service: Deactivated successfully. Jul 10 00:04:33.490596 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:04:33.496782 systemd-logind[1981]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:04:33.501175 systemd-logind[1981]: Removed session 15. 
Jul 10 00:04:38.381331 containerd[1990]: time="2025-07-10T00:04:38.381263100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"a4469852db3577a7ba90e35c50733acab77fec66995d75e582946abea6387177\" pid:6083 exited_at:{seconds:1752105878 nanos:380598000}" Jul 10 00:04:38.511437 systemd[1]: Started sshd@15-172.31.25.230:22-139.178.89.65:53134.service - OpenSSH per-connection server daemon (139.178.89.65:53134). Jul 10 00:04:38.732471 sshd[6097]: Accepted publickey for core from 139.178.89.65 port 53134 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:38.734882 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:38.745012 systemd-logind[1981]: New session 16 of user core. Jul 10 00:04:38.755679 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:04:39.021982 sshd[6099]: Connection closed by 139.178.89.65 port 53134 Jul 10 00:04:39.023276 sshd-session[6097]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:39.030565 systemd[1]: sshd@15-172.31.25.230:22-139.178.89.65:53134.service: Deactivated successfully. Jul 10 00:04:39.034158 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:04:39.036638 systemd-logind[1981]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:04:39.040131 systemd-logind[1981]: Removed session 16. Jul 10 00:04:39.060160 systemd[1]: Started sshd@16-172.31.25.230:22-139.178.89.65:53142.service - OpenSSH per-connection server daemon (139.178.89.65:53142). Jul 10 00:04:39.268410 sshd[6111]: Accepted publickey for core from 139.178.89.65 port 53142 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:39.270941 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:39.279516 systemd-logind[1981]: New session 17 of user core. 
Jul 10 00:04:39.289657 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:04:39.984357 sshd[6113]: Connection closed by 139.178.89.65 port 53142 Jul 10 00:04:39.985781 sshd-session[6111]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:39.992684 systemd[1]: sshd@16-172.31.25.230:22-139.178.89.65:53142.service: Deactivated successfully. Jul 10 00:04:39.996770 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:04:39.998968 systemd-logind[1981]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:04:40.002870 systemd-logind[1981]: Removed session 17. Jul 10 00:04:40.020876 systemd[1]: Started sshd@17-172.31.25.230:22-139.178.89.65:53416.service - OpenSSH per-connection server daemon (139.178.89.65:53416). Jul 10 00:04:40.226597 sshd[6123]: Accepted publickey for core from 139.178.89.65 port 53416 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:40.229206 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:40.241627 systemd-logind[1981]: New session 18 of user core. Jul 10 00:04:40.250696 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:04:41.713352 sshd[6125]: Connection closed by 139.178.89.65 port 53416 Jul 10 00:04:41.713856 sshd-session[6123]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:41.733978 systemd[1]: sshd@17-172.31.25.230:22-139.178.89.65:53416.service: Deactivated successfully. Jul 10 00:04:41.745422 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:04:41.753566 systemd-logind[1981]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:04:41.786021 systemd[1]: Started sshd@18-172.31.25.230:22-139.178.89.65:53422.service - OpenSSH per-connection server daemon (139.178.89.65:53422). Jul 10 00:04:41.792689 systemd-logind[1981]: Removed session 18. 
Jul 10 00:04:42.005945 sshd[6147]: Accepted publickey for core from 139.178.89.65 port 53422 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:42.008169 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:42.017022 systemd-logind[1981]: New session 19 of user core. Jul 10 00:04:42.025680 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:04:42.578550 sshd[6150]: Connection closed by 139.178.89.65 port 53422 Jul 10 00:04:42.579611 sshd-session[6147]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:42.587799 systemd[1]: sshd@18-172.31.25.230:22-139.178.89.65:53422.service: Deactivated successfully. Jul 10 00:04:42.594061 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:04:42.595830 systemd-logind[1981]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:04:42.599489 systemd-logind[1981]: Removed session 19. Jul 10 00:04:42.617289 systemd[1]: Started sshd@19-172.31.25.230:22-139.178.89.65:53428.service - OpenSSH per-connection server daemon (139.178.89.65:53428). Jul 10 00:04:42.816973 sshd[6160]: Accepted publickey for core from 139.178.89.65 port 53428 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:42.819665 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:42.827968 systemd-logind[1981]: New session 20 of user core. Jul 10 00:04:42.837706 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:04:43.090922 sshd[6162]: Connection closed by 139.178.89.65 port 53428 Jul 10 00:04:43.092008 sshd-session[6160]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:43.097503 systemd[1]: sshd@19-172.31.25.230:22-139.178.89.65:53428.service: Deactivated successfully. Jul 10 00:04:43.101070 systemd[1]: session-20.scope: Deactivated successfully. 
Jul 10 00:04:43.107257 systemd-logind[1981]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:04:43.109692 systemd-logind[1981]: Removed session 20. Jul 10 00:04:48.134508 systemd[1]: Started sshd@20-172.31.25.230:22-139.178.89.65:53430.service - OpenSSH per-connection server daemon (139.178.89.65:53430). Jul 10 00:04:48.333543 sshd[6174]: Accepted publickey for core from 139.178.89.65 port 53430 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:48.338426 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:48.351988 systemd-logind[1981]: New session 21 of user core. Jul 10 00:04:48.357998 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:04:48.411541 containerd[1990]: time="2025-07-10T00:04:48.410414710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" id:\"37f6963b8c5910a8a949998c8934ca00cdf92f7d139226e2fd999a2402c151d4\" pid:6188 exited_at:{seconds:1752105888 nanos:408308182}" Jul 10 00:04:48.624232 sshd[6194]: Connection closed by 139.178.89.65 port 53430 Jul 10 00:04:48.625080 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:48.632808 systemd[1]: sshd@20-172.31.25.230:22-139.178.89.65:53430.service: Deactivated successfully. Jul 10 00:04:48.637103 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:04:48.640621 systemd-logind[1981]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:04:48.646124 systemd-logind[1981]: Removed session 21. Jul 10 00:04:53.662964 systemd[1]: Started sshd@21-172.31.25.230:22-139.178.89.65:53876.service - OpenSSH per-connection server daemon (139.178.89.65:53876). 
Jul 10 00:04:53.877052 sshd[6210]: Accepted publickey for core from 139.178.89.65 port 53876 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:53.880722 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:53.892664 systemd-logind[1981]: New session 22 of user core. Jul 10 00:04:53.901741 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:04:54.200236 containerd[1990]: time="2025-07-10T00:04:54.200175722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" id:\"521f4dbd7f97c951ef47dcabb9061c5e6280b40bc2f6e49bca2bda7b48c81ba0\" pid:6232 exited_at:{seconds:1752105894 nanos:199099406}" Jul 10 00:04:54.213435 sshd[6212]: Connection closed by 139.178.89.65 port 53876 Jul 10 00:04:54.211998 sshd-session[6210]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:54.225887 systemd[1]: sshd@21-172.31.25.230:22-139.178.89.65:53876.service: Deactivated successfully. Jul 10 00:04:54.235176 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:04:54.241842 systemd-logind[1981]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:04:54.246682 systemd-logind[1981]: Removed session 22. Jul 10 00:04:59.252849 systemd[1]: Started sshd@22-172.31.25.230:22-139.178.89.65:53882.service - OpenSSH per-connection server daemon (139.178.89.65:53882). Jul 10 00:04:59.481890 sshd[6252]: Accepted publickey for core from 139.178.89.65 port 53882 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:04:59.485125 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:04:59.496527 systemd-logind[1981]: New session 23 of user core. Jul 10 00:04:59.506728 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 10 00:04:59.812703 sshd[6254]: Connection closed by 139.178.89.65 port 53882 Jul 10 00:04:59.816025 sshd-session[6252]: pam_unix(sshd:session): session closed for user core Jul 10 00:04:59.825830 systemd[1]: sshd@22-172.31.25.230:22-139.178.89.65:53882.service: Deactivated successfully. Jul 10 00:04:59.835087 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:04:59.838812 systemd-logind[1981]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:04:59.844941 systemd-logind[1981]: Removed session 23. Jul 10 00:05:04.854881 systemd[1]: Started sshd@23-172.31.25.230:22-139.178.89.65:51938.service - OpenSSH per-connection server daemon (139.178.89.65:51938). Jul 10 00:05:05.074608 sshd[6269]: Accepted publickey for core from 139.178.89.65 port 51938 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:05:05.079264 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:05:05.088585 systemd-logind[1981]: New session 24 of user core. Jul 10 00:05:05.096963 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:05:05.436779 sshd[6271]: Connection closed by 139.178.89.65 port 51938 Jul 10 00:05:05.439910 sshd-session[6269]: pam_unix(sshd:session): session closed for user core Jul 10 00:05:05.450971 systemd[1]: sshd@23-172.31.25.230:22-139.178.89.65:51938.service: Deactivated successfully. Jul 10 00:05:05.459315 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:05:05.463551 systemd-logind[1981]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:05:05.469084 systemd-logind[1981]: Removed session 24. 
Jul 10 00:05:08.555692 containerd[1990]: time="2025-07-10T00:05:08.555247122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"f12bdb88b1d6acf17a7ad79869ac48ba7c16c4467ead8ac6e9529a96219cf64e\" pid:6297 exited_at:{seconds:1752105908 nanos:554775390}" Jul 10 00:05:10.474784 systemd[1]: Started sshd@24-172.31.25.230:22-139.178.89.65:51692.service - OpenSSH per-connection server daemon (139.178.89.65:51692). Jul 10 00:05:10.693429 sshd[6308]: Accepted publickey for core from 139.178.89.65 port 51692 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:05:10.697347 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:05:10.711139 systemd-logind[1981]: New session 25 of user core. Jul 10 00:05:10.721716 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:05:11.010427 sshd[6310]: Connection closed by 139.178.89.65 port 51692 Jul 10 00:05:11.011486 sshd-session[6308]: pam_unix(sshd:session): session closed for user core Jul 10 00:05:11.019841 systemd[1]: sshd@24-172.31.25.230:22-139.178.89.65:51692.service: Deactivated successfully. Jul 10 00:05:11.028073 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:05:11.032247 systemd-logind[1981]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:05:11.037791 systemd-logind[1981]: Removed session 25. Jul 10 00:05:15.018715 containerd[1990]: time="2025-07-10T00:05:15.018378082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" id:\"c8d3329aa30625e7e5f6ac74465f80a809363d8f15b8242eee2249572195b690\" pid:6334 exited_at:{seconds:1752105915 nanos:18026038}" Jul 10 00:05:16.059466 systemd[1]: Started sshd@25-172.31.25.230:22-139.178.89.65:51706.service - OpenSSH per-connection server daemon (139.178.89.65:51706). 
Jul 10 00:05:16.266458 sshd[6345]: Accepted publickey for core from 139.178.89.65 port 51706 ssh2: RSA SHA256:V/GqA9wd+OVwK90q9ciGk9yrx6izpb+btxAqtX7Qkhw Jul 10 00:05:16.268724 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:05:16.278913 systemd-logind[1981]: New session 26 of user core. Jul 10 00:05:16.287738 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:05:16.618296 sshd[6347]: Connection closed by 139.178.89.65 port 51706 Jul 10 00:05:16.622595 sshd-session[6345]: pam_unix(sshd:session): session closed for user core Jul 10 00:05:16.630498 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:05:16.632358 systemd[1]: sshd@25-172.31.25.230:22-139.178.89.65:51706.service: Deactivated successfully. Jul 10 00:05:16.641788 systemd-logind[1981]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:05:16.646919 systemd-logind[1981]: Removed session 26. Jul 10 00:05:18.399681 containerd[1990]: time="2025-07-10T00:05:18.399609423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" id:\"da53fbfbd9aa40b497f2a1fb15b2f0ed4b9e3b9e14c2646be0ed29324a4d8d26\" pid:6376 exited_at:{seconds:1752105918 nanos:398776455}" Jul 10 00:05:24.134992 containerd[1990]: time="2025-07-10T00:05:24.134796031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12f71c73efd80f2b2b5577d3bc8f52d120989e4446d381b182dc3178fe735e46\" id:\"5b81631f7100c475c154bf3d90e2021ce8d13a75764652e0fd119600bd81c93b\" pid:6399 exited_at:{seconds:1752105924 nanos:134014819}" Jul 10 00:05:30.854281 systemd[1]: cri-containerd-76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560.scope: Deactivated successfully. Jul 10 00:05:30.854927 systemd[1]: cri-containerd-76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560.scope: Consumed 27.564s CPU time, 91.7M memory peak, 480K read from disk. 
Jul 10 00:05:30.864677 containerd[1990]: time="2025-07-10T00:05:30.864614261Z" level=info msg="received exit event container_id:\"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" id:\"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" pid:3799 exit_status:1 exited_at:{seconds:1752105930 nanos:864051869}" Jul 10 00:05:30.865832 containerd[1990]: time="2025-07-10T00:05:30.864967433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" id:\"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" pid:3799 exit_status:1 exited_at:{seconds:1752105930 nanos:864051869}" Jul 10 00:05:30.910214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560-rootfs.mount: Deactivated successfully. Jul 10 00:05:31.545181 systemd[1]: cri-containerd-6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156.scope: Deactivated successfully. Jul 10 00:05:31.545807 systemd[1]: cri-containerd-6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156.scope: Consumed 6.644s CPU time, 62.9M memory peak, 64K read from disk. 
Jul 10 00:05:31.555289 containerd[1990]: time="2025-07-10T00:05:31.555216712Z" level=info msg="received exit event container_id:\"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\" id:\"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\" pid:3126 exit_status:1 exited_at:{seconds:1752105931 nanos:554721208}" Jul 10 00:05:31.555799 containerd[1990]: time="2025-07-10T00:05:31.555619756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\" id:\"6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156\" pid:3126 exit_status:1 exited_at:{seconds:1752105931 nanos:554721208}" Jul 10 00:05:31.610649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156-rootfs.mount: Deactivated successfully. Jul 10 00:05:31.621450 kubelet[3298]: I0710 00:05:31.621351 3298 scope.go:117] "RemoveContainer" containerID="76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560" Jul 10 00:05:31.644041 containerd[1990]: time="2025-07-10T00:05:31.643947280Z" level=info msg="CreateContainer within sandbox \"c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 10 00:05:31.665354 containerd[1990]: time="2025-07-10T00:05:31.663301505Z" level=info msg="Container d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:05:31.671988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876542199.mount: Deactivated successfully. 
Jul 10 00:05:31.683567 containerd[1990]: time="2025-07-10T00:05:31.683503793Z" level=info msg="CreateContainer within sandbox \"c2b0d9157a5dc212d516bb23a8f3b998b5c69890400c7374246d6840d9314296\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\""
Jul 10 00:05:31.684598 containerd[1990]: time="2025-07-10T00:05:31.684548165Z" level=info msg="StartContainer for \"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\""
Jul 10 00:05:31.686245 containerd[1990]: time="2025-07-10T00:05:31.686187389Z" level=info msg="connecting to shim d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9" address="unix:///run/containerd/s/d03f7466a718fc407ae638b71015e66dfbc89e3538dbf88623b5964230385291" protocol=ttrpc version=3
Jul 10 00:05:31.739691 systemd[1]: Started cri-containerd-d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9.scope - libcontainer container d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9.
Jul 10 00:05:31.799444 containerd[1990]: time="2025-07-10T00:05:31.799259441Z" level=info msg="StartContainer for \"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\" returns successfully"
Jul 10 00:05:32.632940 kubelet[3298]: I0710 00:05:32.632872 3298 scope.go:117] "RemoveContainer" containerID="6976546e0ce54b4ac0edf95fb7114da9648ec996afecac809a041f9c2cac9156"
Jul 10 00:05:32.637259 containerd[1990]: time="2025-07-10T00:05:32.637184465Z" level=info msg="CreateContainer within sandbox \"41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 10 00:05:32.657498 containerd[1990]: time="2025-07-10T00:05:32.655923785Z" level=info msg="Container 26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:05:32.675927 containerd[1990]: time="2025-07-10T00:05:32.675757530Z" level=info msg="CreateContainer within sandbox \"41aa6d0b80e4fb4c536cec501da389331cdef1a7e8e74eb68aef0b35480302b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a\""
Jul 10 00:05:32.676709 containerd[1990]: time="2025-07-10T00:05:32.676668534Z" level=info msg="StartContainer for \"26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a\""
Jul 10 00:05:32.679002 containerd[1990]: time="2025-07-10T00:05:32.678883506Z" level=info msg="connecting to shim 26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a" address="unix:///run/containerd/s/5cefcc594113ecaf838fdfd5d2c79e16d0b920675e1e1f61f2629573ef1933a3" protocol=ttrpc version=3
Jul 10 00:05:32.720691 systemd[1]: Started cri-containerd-26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a.scope - libcontainer container 26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a.
Jul 10 00:05:32.804501 containerd[1990]: time="2025-07-10T00:05:32.804376650Z" level=info msg="StartContainer for \"26086a374ff6981fa467c5ec8108b982316df3a81c7f9e3d943e96bcab5c539a\" returns successfully"
Jul 10 00:05:33.277735 containerd[1990]: time="2025-07-10T00:05:33.277678565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"68bba3152e65d3e4dd01198fa5237501a408d7f4b0a2df1f8e2dcfd56c356022\" pid:6534 exited_at:{seconds:1752105933 nanos:275312897}"
Jul 10 00:05:33.776563 kubelet[3298]: E0710 00:05:33.776489 3298 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 10 00:05:35.201975 systemd[1]: cri-containerd-3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347.scope: Deactivated successfully.
Jul 10 00:05:35.202591 systemd[1]: cri-containerd-3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347.scope: Consumed 4.199s CPU time, 21.3M memory peak, 300K read from disk.
Jul 10 00:05:35.209118 containerd[1990]: time="2025-07-10T00:05:35.209027046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\" id:\"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\" pid:3144 exit_status:1 exited_at:{seconds:1752105935 nanos:207896430}"
Jul 10 00:05:35.210757 containerd[1990]: time="2025-07-10T00:05:35.209079150Z" level=info msg="received exit event container_id:\"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\" id:\"3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347\" pid:3144 exit_status:1 exited_at:{seconds:1752105935 nanos:207896430}"
Jul 10 00:05:35.255835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347-rootfs.mount: Deactivated successfully.
Jul 10 00:05:35.670414 kubelet[3298]: I0710 00:05:35.670117 3298 scope.go:117] "RemoveContainer" containerID="3fd7bc1d507cf8ba7348b8347f68f2bbe0b09c6beaf9652c47ed2e90e319d347"
Jul 10 00:05:35.674122 containerd[1990]: time="2025-07-10T00:05:35.674038964Z" level=info msg="CreateContainer within sandbox \"983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 10 00:05:35.701416 containerd[1990]: time="2025-07-10T00:05:35.699583605Z" level=info msg="Container e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:05:35.724533 containerd[1990]: time="2025-07-10T00:05:35.724481649Z" level=info msg="CreateContainer within sandbox \"983f5205c8893bf0d9578678f4d7fc9ef9ffbb6f51ca1039f1c3926661e984d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71\""
Jul 10 00:05:35.725370 containerd[1990]: time="2025-07-10T00:05:35.725330577Z" level=info msg="StartContainer for \"e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71\""
Jul 10 00:05:35.727572 containerd[1990]: time="2025-07-10T00:05:35.727523085Z" level=info msg="connecting to shim e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71" address="unix:///run/containerd/s/70deeddd152636232986c22ba98e94708562a297fac5fd5c31ee33a2c6435d39" protocol=ttrpc version=3
Jul 10 00:05:35.765704 systemd[1]: Started cri-containerd-e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71.scope - libcontainer container e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71.
Jul 10 00:05:35.859493 containerd[1990]: time="2025-07-10T00:05:35.859424097Z" level=info msg="StartContainer for \"e5fa3f2dc6f5da4cb3eba007e09c229e3eb2f02717543d6186823af58cb8bc71\" returns successfully"
Jul 10 00:05:38.369710 containerd[1990]: time="2025-07-10T00:05:38.369645298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d44c2331491e323238a69f3d081c3021ef29bc89c5d21c900a8b8a9586ef4a7c\" id:\"4797d1d7d73ee7ba6132f113db5cd48a3b9c559dff1025ad8289b11928238cc0\" pid:6604 exited_at:{seconds:1752105938 nanos:369250558}"
Jul 10 00:05:43.359559 systemd[1]: cri-containerd-d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9.scope: Deactivated successfully.
Jul 10 00:05:43.362319 containerd[1990]: time="2025-07-10T00:05:43.361108203Z" level=info msg="received exit event container_id:\"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\" id:\"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\" pid:6473 exit_status:1 exited_at:{seconds:1752105943 nanos:359746071}"
Jul 10 00:05:43.362319 containerd[1990]: time="2025-07-10T00:05:43.361524399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\" id:\"d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9\" pid:6473 exit_status:1 exited_at:{seconds:1752105943 nanos:359746071}"
Jul 10 00:05:43.361673 systemd[1]: cri-containerd-d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9.scope: Consumed 461ms CPU time, 33.7M memory peak, 1.1M read from disk.
Jul 10 00:05:43.402214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9-rootfs.mount: Deactivated successfully.
Jul 10 00:05:43.707935 kubelet[3298]: I0710 00:05:43.707139 3298 scope.go:117] "RemoveContainer" containerID="76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560"
Jul 10 00:05:43.709379 kubelet[3298]: I0710 00:05:43.708508 3298 scope.go:117] "RemoveContainer" containerID="d55d4b030749bee0b74929d92a0ae64211709c603602bbab9d4a988deabc60c9"
Jul 10 00:05:43.709379 kubelet[3298]: E0710 00:05:43.709102 3298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-2q5pk_tigera-operator(0774a03e-32af-4f52-8806-dbc380e98322)\"" pod="tigera-operator/tigera-operator-747864d56d-2q5pk" podUID="0774a03e-32af-4f52-8806-dbc380e98322"
Jul 10 00:05:43.711616 containerd[1990]: time="2025-07-10T00:05:43.711539752Z" level=info msg="RemoveContainer for \"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\""
Jul 10 00:05:43.720913 containerd[1990]: time="2025-07-10T00:05:43.720754576Z" level=info msg="RemoveContainer for \"76e1366cd26d03a5842dd4033e0bc36230c3a39d5ae21358e542c92b7dbcb560\" returns successfully"
Jul 10 00:05:43.777239 kubelet[3298]: E0710 00:05:43.776820 3298 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 10 00:05:48.375072 containerd[1990]: time="2025-07-10T00:05:48.374971532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0a5ca889c6c1d2329ef304f8ada1b7e67940c71a211a573276dd2d148a78dc1\" id:\"2871048eb01b0ce648ae0334c4ae8c8eddb6dd48107455eb7d1834a17e37f2e9\" pid:6640 exit_status:1 exited_at:{seconds:1752105948 nanos:374582264}"