Jul 6 23:24:31.141607 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 6 23:24:31.144456 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025
Jul 6 23:24:31.144485 kernel: KASLR disabled due to lack of seed
Jul 6 23:24:31.144502 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:24:31.144517 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Jul 6 23:24:31.144532 kernel: secureboot: Secure boot disabled
Jul 6 23:24:31.144549 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:24:31.144565 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 6 23:24:31.144581 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:24:31.144596 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:24:31.144611 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 6 23:24:31.146406 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 6 23:24:31.146434 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:24:31.146450 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 6 23:24:31.146468 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 6 23:24:31.146484 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 6 23:24:31.146509 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:24:31.146525 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 6 23:24:31.146541 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 6 23:24:31.146557 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 6 23:24:31.146573 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 6 23:24:31.146588 kernel: printk: legacy bootconsole [uart0] enabled
Jul 6 23:24:31.146604 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:24:31.146665 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:24:31.146689 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff]
Jul 6 23:24:31.146705 kernel: Zone ranges:
Jul 6 23:24:31.146721 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 6 23:24:31.146743 kernel: DMA32 empty
Jul 6 23:24:31.146759 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 6 23:24:31.146775 kernel: Device empty
Jul 6 23:24:31.146791 kernel: Movable zone start for each node
Jul 6 23:24:31.146806 kernel: Early memory node ranges
Jul 6 23:24:31.146821 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 6 23:24:31.146837 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 6 23:24:31.146852 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 6 23:24:31.146868 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 6 23:24:31.146883 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 6 23:24:31.146899 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 6 23:24:31.146914 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 6 23:24:31.146934 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 6 23:24:31.146957 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:24:31.146977 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 6 23:24:31.146993 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:24:31.147010 kernel: psci: PSCIv1.0 detected in firmware.
Jul 6 23:24:31.147031 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:24:31.147047 kernel: psci: Trusted OS migration not required
Jul 6 23:24:31.147064 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:24:31.147081 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 6 23:24:31.147097 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:24:31.147114 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:24:31.147131 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:24:31.147147 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:24:31.147165 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:24:31.147181 kernel: CPU features: detected: Spectre-v2
Jul 6 23:24:31.147198 kernel: CPU features: detected: Spectre-v3a
Jul 6 23:24:31.147219 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:24:31.147236 kernel: CPU features: detected: ARM erratum 1742098
Jul 6 23:24:31.147253 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 6 23:24:31.147269 kernel: alternatives: applying boot alternatives
Jul 6 23:24:31.147289 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:24:31.147306 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:24:31.147323 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:24:31.147340 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:24:31.147357 kernel: Fallback order for Node 0: 0
Jul 6 23:24:31.147373 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 6 23:24:31.147395 kernel: Policy zone: Normal
Jul 6 23:24:31.147412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:24:31.147428 kernel: software IO TLB: area num 2.
Jul 6 23:24:31.147445 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 6 23:24:31.147462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:24:31.147479 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:24:31.147497 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:24:31.147515 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:24:31.147531 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:24:31.147548 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:24:31.147565 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:24:31.147581 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:24:31.147602 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:24:31.147723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:24:31.147744 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:24:31.147761 kernel: GICv3: 96 SPIs implemented
Jul 6 23:24:31.147777 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:24:31.147794 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:24:31.147810 kernel: GICv3: GICv3 features: 16 PPIs
Jul 6 23:24:31.147827 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:24:31.147844 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 6 23:24:31.147860 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 6 23:24:31.147877 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:24:31.147894 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:24:31.147918 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 6 23:24:31.147935 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 6 23:24:31.147952 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 6 23:24:31.147968 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:24:31.147985 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 6 23:24:31.148001 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 6 23:24:31.148018 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 6 23:24:31.148034 kernel: Console: colour dummy device 80x25
Jul 6 23:24:31.148052 kernel: printk: legacy console [tty1] enabled
Jul 6 23:24:31.148069 kernel: ACPI: Core revision 20240827
Jul 6 23:24:31.148090 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 6 23:24:31.148108 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:24:31.148124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:24:31.148141 kernel: landlock: Up and running.
Jul 6 23:24:31.148157 kernel: SELinux: Initializing.
Jul 6 23:24:31.148174 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:24:31.148191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:24:31.148208 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:24:31.148225 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:24:31.148246 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:24:31.148263 kernel: Remapping and enabling EFI services.
Jul 6 23:24:31.148280 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:24:31.148296 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:24:31.148313 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 6 23:24:31.148330 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 6 23:24:31.148347 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 6 23:24:31.148363 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:24:31.148380 kernel: SMP: Total of 2 processors activated.
Jul 6 23:24:31.148400 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:24:31.148428 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:24:31.148446 kernel: CPU features: detected: 32-bit EL1 Support
Jul 6 23:24:31.148467 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:24:31.148484 kernel: alternatives: applying system-wide alternatives
Jul 6 23:24:31.148502 kernel: Memory: 3813092K/4030464K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 212412K reserved, 0K cma-reserved)
Jul 6 23:24:31.148520 kernel: devtmpfs: initialized
Jul 6 23:24:31.148538 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:24:31.148560 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:24:31.148577 kernel: 16960 pages in range for non-PLT usage
Jul 6 23:24:31.148595 kernel: 508480 pages in range for PLT usage
Jul 6 23:24:31.148612 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:24:31.148674 kernel: SMBIOS 3.0.0 present.
Jul 6 23:24:31.148695 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 6 23:24:31.148713 kernel: DMI: Memory slots populated: 0/0
Jul 6 23:24:31.148730 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:24:31.148748 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:24:31.148772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:24:31.148790 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:24:31.148807 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:24:31.148825 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Jul 6 23:24:31.148842 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:24:31.148860 kernel: cpuidle: using governor menu
Jul 6 23:24:31.148878 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:24:31.148895 kernel: ASID allocator initialised with 65536 entries
Jul 6 23:24:31.148913 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:24:31.148941 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:24:31.148959 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:24:31.148976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:24:31.148993 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:24:31.149011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:24:31.149029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:24:31.149046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:24:31.149064 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:24:31.149082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:24:31.149103 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:24:31.149121 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:24:31.149138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:24:31.149156 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:24:31.149173 kernel: ACPI: Interpreter enabled
Jul 6 23:24:31.149191 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:24:31.149208 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:24:31.149226 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:24:31.149243 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:24:31.149264 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 6 23:24:31.149550 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:24:31.153575 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:24:31.153824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:24:31.154011 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 6 23:24:31.154195 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 6 23:24:31.154220 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 6 23:24:31.154247 kernel: acpiphp: Slot [1] registered
Jul 6 23:24:31.154266 kernel: acpiphp: Slot [2] registered
Jul 6 23:24:31.154284 kernel: acpiphp: Slot [3] registered
Jul 6 23:24:31.154301 kernel: acpiphp: Slot [4] registered
Jul 6 23:24:31.154319 kernel: acpiphp: Slot [5] registered
Jul 6 23:24:31.154337 kernel: acpiphp: Slot [6] registered
Jul 6 23:24:31.154354 kernel: acpiphp: Slot [7] registered
Jul 6 23:24:31.154372 kernel: acpiphp: Slot [8] registered
Jul 6 23:24:31.154389 kernel: acpiphp: Slot [9] registered
Jul 6 23:24:31.154406 kernel: acpiphp: Slot [10] registered
Jul 6 23:24:31.154428 kernel: acpiphp: Slot [11] registered
Jul 6 23:24:31.154446 kernel: acpiphp: Slot [12] registered
Jul 6 23:24:31.154464 kernel: acpiphp: Slot [13] registered
Jul 6 23:24:31.154482 kernel: acpiphp: Slot [14] registered
Jul 6 23:24:31.154499 kernel: acpiphp: Slot [15] registered
Jul 6 23:24:31.154517 kernel: acpiphp: Slot [16] registered
Jul 6 23:24:31.154534 kernel: acpiphp: Slot [17] registered
Jul 6 23:24:31.154552 kernel: acpiphp: Slot [18] registered
Jul 6 23:24:31.154570 kernel: acpiphp: Slot [19] registered
Jul 6 23:24:31.154592 kernel: acpiphp: Slot [20] registered
Jul 6 23:24:31.154610 kernel: acpiphp: Slot [21] registered
Jul 6 23:24:31.154678 kernel: acpiphp: Slot [22] registered
Jul 6 23:24:31.154703 kernel: acpiphp: Slot [23] registered
Jul 6 23:24:31.154721 kernel: acpiphp: Slot [24] registered
Jul 6 23:24:31.154739 kernel: acpiphp: Slot [25] registered
Jul 6 23:24:31.154756 kernel: acpiphp: Slot [26] registered
Jul 6 23:24:31.154774 kernel: acpiphp: Slot [27] registered
Jul 6 23:24:31.154791 kernel: acpiphp: Slot [28] registered
Jul 6 23:24:31.154815 kernel: acpiphp: Slot [29] registered
Jul 6 23:24:31.154833 kernel: acpiphp: Slot [30] registered
Jul 6 23:24:31.154850 kernel: acpiphp: Slot [31] registered
Jul 6 23:24:31.154867 kernel: PCI host bridge to bus 0000:00
Jul 6 23:24:31.155078 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 6 23:24:31.155248 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:24:31.155414 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:24:31.155579 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 6 23:24:31.158267 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:24:31.158548 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 6 23:24:31.158812 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 6 23:24:31.159029 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 6 23:24:31.159224 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 6 23:24:31.159414 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:24:31.159658 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 6 23:24:31.159869 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 6 23:24:31.160061 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 6 23:24:31.160269 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 6 23:24:31.160469 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:24:31.160726 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 6 23:24:31.160931 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 6 23:24:31.161134 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 6 23:24:31.161334 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 6 23:24:31.161528 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 6 23:24:31.161997 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 6 23:24:31.162189 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:24:31.162363 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:24:31.162388 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:24:31.162416 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:24:31.162435 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:24:31.162453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:24:31.162471 kernel: iommu: Default domain type: Translated
Jul 6 23:24:31.162489 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:24:31.162523 kernel: efivars: Registered efivars operations
Jul 6 23:24:31.162546 kernel: vgaarb: loaded
Jul 6 23:24:31.162565 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:24:31.162582 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:24:31.162606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:24:31.162671 kernel: pnp: PnP ACPI init
Jul 6 23:24:31.163024 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 6 23:24:31.163738 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:24:31.164101 kernel: NET: Registered PF_INET protocol family
Jul 6 23:24:31.164369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:24:31.164390 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:24:31.164410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:24:31.164428 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:24:31.164454 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:24:31.164472 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:24:31.164489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:24:31.164507 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:24:31.164524 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:24:31.164542 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:24:31.164560 kernel: kvm [1]: HYP mode not available
Jul 6 23:24:31.164577 kernel: Initialise system trusted keyrings
Jul 6 23:24:31.164595 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:24:31.165127 kernel: Key type asymmetric registered
Jul 6 23:24:31.165153 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:24:31.165171 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:24:31.165190 kernel: io scheduler mq-deadline registered
Jul 6 23:24:31.165208 kernel: io scheduler kyber registered
Jul 6 23:24:31.165226 kernel: io scheduler bfq registered
Jul 6 23:24:31.165450 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 6 23:24:31.165478 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:24:31.165503 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:24:31.165521 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 6 23:24:31.165539 kernel: ACPI: button: Sleep Button [SLPB]
Jul 6 23:24:31.165556 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:24:31.165575 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 6 23:24:31.165803 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 6 23:24:31.165830 kernel: printk: legacy console [ttyS0] disabled
Jul 6 23:24:31.165849 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 6 23:24:31.165867 kernel: printk: legacy console [ttyS0] enabled
Jul 6 23:24:31.165890 kernel: printk: legacy bootconsole [uart0] disabled
Jul 6 23:24:31.165908 kernel: thunder_xcv, ver 1.0
Jul 6 23:24:31.165925 kernel: thunder_bgx, ver 1.0
Jul 6 23:24:31.165943 kernel: nicpf, ver 1.0
Jul 6 23:24:31.165960 kernel: nicvf, ver 1.0
Jul 6 23:24:31.166154 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:24:31.166332 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:24:30 UTC (1751844270)
Jul 6 23:24:31.166357 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:24:31.166380 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 6 23:24:31.166398 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:24:31.166416 kernel: watchdog: NMI not fully supported
Jul 6 23:24:31.166433 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:24:31.166450 kernel: Segment Routing with IPv6
Jul 6 23:24:31.166468 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:24:31.166485 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:24:31.166503 kernel: Key type dns_resolver registered
Jul 6 23:24:31.166520 kernel: registered taskstats version 1
Jul 6 23:24:31.166542 kernel: Loading compiled-in X.509 certificates
Jul 6 23:24:31.166560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb'
Jul 6 23:24:31.166577 kernel: Demotion targets for Node 0: null
Jul 6 23:24:31.166595 kernel: Key type .fscrypt registered
Jul 6 23:24:31.166612 kernel: Key type fscrypt-provisioning registered
Jul 6 23:24:31.166651 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:24:31.166670 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:24:31.166688 kernel: ima: No architecture policies found
Jul 6 23:24:31.166705 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:24:31.166728 kernel: clk: Disabling unused clocks
Jul 6 23:24:31.166746 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:24:31.166764 kernel: Warning: unable to open an initial console.
Jul 6 23:24:31.166781 kernel: Freeing unused kernel memory: 39424K
Jul 6 23:24:31.166799 kernel: Run /init as init process
Jul 6 23:24:31.166816 kernel: with arguments:
Jul 6 23:24:31.166833 kernel: /init
Jul 6 23:24:31.166850 kernel: with environment:
Jul 6 23:24:31.166867 kernel: HOME=/
Jul 6 23:24:31.166888 kernel: TERM=linux
Jul 6 23:24:31.166906 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:24:31.166925 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:24:31.166949 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:24:31.166969 systemd[1]: Detected virtualization amazon.
Jul 6 23:24:31.166988 systemd[1]: Detected architecture arm64.
Jul 6 23:24:31.167006 systemd[1]: Running in initrd.
Jul 6 23:24:31.167025 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:24:31.167049 systemd[1]: Hostname set to .
Jul 6 23:24:31.167068 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:24:31.167087 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:24:31.167106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:24:31.167125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:24:31.167145 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:24:31.167164 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:24:31.167184 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:24:31.167209 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:24:31.167230 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:24:31.167250 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:24:31.167269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:24:31.167288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:24:31.167307 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:24:31.167330 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:24:31.167350 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:24:31.167369 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:24:31.167388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:24:31.167408 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:24:31.167427 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:24:31.167447 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:24:31.167466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:24:31.167485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:24:31.167508 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:24:31.167527 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:24:31.167547 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:24:31.167566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:24:31.167585 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:24:31.167605 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:24:31.167644 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:24:31.167666 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:24:31.167691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:24:31.167711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:31.167731 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:24:31.167751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:24:31.167771 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:24:31.167795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:24:31.167814 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:24:31.167832 kernel: Bridge firewalling registered
Jul 6 23:24:31.167851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:24:31.167904 systemd-journald[258]: Collecting audit messages is disabled.
Jul 6 23:24:31.167962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:24:31.167987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:31.168008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:24:31.168028 systemd-journald[258]: Journal started
Jul 6 23:24:31.168067 systemd-journald[258]: Runtime Journal (/run/log/journal/ec244dc7223ca47430f1a1b86f363917) is 8M, max 75.3M, 67.3M free.
Jul 6 23:24:31.087497 systemd-modules-load[259]: Inserted module 'overlay'
Jul 6 23:24:31.172636 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:24:31.121142 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jul 6 23:24:31.188898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:24:31.190041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:24:31.195189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:24:31.207336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:24:31.229009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:24:31.241740 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:24:31.253270 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:24:31.262990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:24:31.270867 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:24:31.279330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:24:31.316070 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:24:31.365517 systemd-resolved[298]: Positive Trust Anchors:
Jul 6 23:24:31.365543 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:24:31.365603 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:24:31.481661 kernel: SCSI subsystem initialized
Jul 6 23:24:31.489659 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:24:31.502827 kernel: iscsi: registered transport (tcp)
Jul 6 23:24:31.524204 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:24:31.524278 kernel: QLogic iSCSI HBA Driver
Jul 6 23:24:31.556827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:24:31.593479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:24:31.602028 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:24:31.654694 kernel: random: crng init done
Jul 6 23:24:31.655331 systemd-resolved[298]: Defaulting to hostname 'linux'.
Jul 6 23:24:31.659082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:24:31.661807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:31.694169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:24:31.700417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:24:31.797670 kernel: raid6: neonx8 gen() 6493 MB/s
Jul 6 23:24:31.814653 kernel: raid6: neonx4 gen() 6509 MB/s
Jul 6 23:24:31.831651 kernel: raid6: neonx2 gen() 5433 MB/s
Jul 6 23:24:31.848651 kernel: raid6: neonx1 gen() 3943 MB/s
Jul 6 23:24:31.865657 kernel: raid6: int64x8 gen() 3635 MB/s
Jul 6 23:24:31.882652 kernel: raid6: int64x4 gen() 3720 MB/s
Jul 6 23:24:31.899652 kernel: raid6: int64x2 gen() 3599 MB/s
Jul 6 23:24:31.917658 kernel: raid6: int64x1 gen() 2774 MB/s
Jul 6 23:24:31.917690 kernel: raid6: using algorithm neonx4 gen() 6509 MB/s
Jul 6 23:24:31.936653 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Jul 6 23:24:31.936688 kernel: raid6: using neon recovery algorithm
Jul 6 23:24:31.945261 kernel: xor: measuring software checksum speed
Jul 6 23:24:31.945312 kernel: 8regs : 12934 MB/sec
Jul 6 23:24:31.946455 kernel: 32regs : 13044 MB/sec
Jul 6 23:24:31.948834 kernel: arm64_neon : 8448 MB/sec
Jul 6 23:24:31.948867 kernel: xor: using function: 32regs (13044 MB/sec)
Jul 6 23:24:32.039666 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:24:32.052681 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:24:32.059733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:24:32.109520 systemd-udevd[509]: Using default interface naming scheme 'v255'.
Jul 6 23:24:32.121505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:24:32.128253 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:24:32.165885 dracut-pre-trigger[515]: rd.md=0: removing MD RAID activation
Jul 6 23:24:32.210361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:24:32.217804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:24:32.349902 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:24:32.360848 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:24:32.513778 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:24:32.513845 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 6 23:24:32.519433 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 6 23:24:32.522116 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:24:32.522442 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:24:32.527265 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:24:32.531652 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:24:32.540754 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c5:b9:3e:ab:bd
Jul 6 23:24:32.549643 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:24:32.549706 kernel: GPT:9289727 != 16777215
Jul 6 23:24:32.549731 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:24:32.550766 kernel: GPT:9289727 != 16777215
Jul 6 23:24:32.550800 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:24:32.552382 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:24:32.556637 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:24:32.552639 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:32.559422 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:32.566406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:32.573816 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:24:32.579155 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:24:32.620715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:32.635664 kernel: nvme nvme0: using unchecked data buffer
Jul 6 23:24:32.767139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:24:32.797324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:24:32.804341 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:24:32.830917 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:24:32.855606 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:24:32.901033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:24:32.901565 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:24:32.902314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:24:32.903030 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:24:32.910835 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:24:32.920988 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:24:32.949255 disk-uuid[687]: Primary Header is updated.
Jul 6 23:24:32.949255 disk-uuid[687]: Secondary Entries is updated.
Jul 6 23:24:32.949255 disk-uuid[687]: Secondary Header is updated.
Jul 6 23:24:32.961654 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:24:32.967394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:24:33.989719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:24:33.991280 disk-uuid[689]: The operation has completed successfully.
Jul 6 23:24:34.177208 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:24:34.179672 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:24:34.260104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:24:34.280102 sh[956]: Success
Jul 6 23:24:34.309361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:24:34.309438 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:24:34.311650 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:24:34.323675 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:24:34.445814 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:24:34.453523 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:24:34.477128 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:24:34.508982 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:24:34.509055 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (979)
Jul 6 23:24:34.514152 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296
Jul 6 23:24:34.514218 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:24:34.515411 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:24:34.567152 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:24:34.571400 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:24:34.576340 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:24:34.581665 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:24:34.589314 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:24:34.650668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010)
Jul 6 23:24:34.655266 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:24:34.655350 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:24:34.655377 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:24:34.681716 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:24:34.683593 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:24:34.690854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:24:34.775673 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:24:34.784398 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:24:34.878041 systemd-networkd[1149]: lo: Link UP
Jul 6 23:24:34.878062 systemd-networkd[1149]: lo: Gained carrier
Jul 6 23:24:34.883181 systemd-networkd[1149]: Enumeration completed
Jul 6 23:24:34.883820 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:24:34.884781 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:34.884787 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:24:34.888659 systemd[1]: Reached target network.target - Network.
Jul 6 23:24:34.909825 systemd-networkd[1149]: eth0: Link UP
Jul 6 23:24:34.909844 systemd-networkd[1149]: eth0: Gained carrier
Jul 6 23:24:34.909865 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:34.929741 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.21.233/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:24:34.990976 ignition[1082]: Ignition 2.21.0
Jul 6 23:24:34.991005 ignition[1082]: Stage: fetch-offline
Jul 6 23:24:34.991863 ignition[1082]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:34.996180 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:24:34.991887 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:34.993021 ignition[1082]: Ignition finished successfully
Jul 6 23:24:35.011845 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:24:35.053222 ignition[1161]: Ignition 2.21.0
Jul 6 23:24:35.053253 ignition[1161]: Stage: fetch
Jul 6 23:24:35.053817 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:35.053842 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:35.054139 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:35.068157 ignition[1161]: PUT result: OK
Jul 6 23:24:35.072311 ignition[1161]: parsed url from cmdline: ""
Jul 6 23:24:35.072326 ignition[1161]: no config URL provided
Jul 6 23:24:35.072344 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:24:35.072368 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:24:35.072398 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:35.075326 ignition[1161]: PUT result: OK
Jul 6 23:24:35.075415 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:24:35.081363 ignition[1161]: GET result: OK
Jul 6 23:24:35.081593 ignition[1161]: parsing config with SHA512: 29c37c268dc05d55974bc7a9b3984bdb8b3337eebb1716c74ba95b566d60dd2630d61c14f748d281dcde1e37cd5193a68af7a0d45844a209bd109fdbe8149ae6
Jul 6 23:24:35.098440 unknown[1161]: fetched base config from "system"
Jul 6 23:24:35.098705 unknown[1161]: fetched base config from "system"
Jul 6 23:24:35.099373 ignition[1161]: fetch: fetch complete
Jul 6 23:24:35.098720 unknown[1161]: fetched user config from "aws"
Jul 6 23:24:35.099385 ignition[1161]: fetch: fetch passed
Jul 6 23:24:35.099493 ignition[1161]: Ignition finished successfully
Jul 6 23:24:35.114678 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:24:35.121885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:24:35.174964 ignition[1168]: Ignition 2.21.0
Jul 6 23:24:35.175469 ignition[1168]: Stage: kargs
Jul 6 23:24:35.176047 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:35.176072 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:35.176237 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:35.183320 ignition[1168]: PUT result: OK
Jul 6 23:24:35.194362 ignition[1168]: kargs: kargs passed
Jul 6 23:24:35.194503 ignition[1168]: Ignition finished successfully
Jul 6 23:24:35.198078 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:24:35.205470 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:24:35.245133 ignition[1174]: Ignition 2.21.0
Jul 6 23:24:35.246849 ignition[1174]: Stage: disks
Jul 6 23:24:35.247486 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:35.247508 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:35.247702 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:35.251340 ignition[1174]: PUT result: OK
Jul 6 23:24:35.260276 ignition[1174]: disks: disks passed
Jul 6 23:24:35.260381 ignition[1174]: Ignition finished successfully
Jul 6 23:24:35.264722 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:24:35.269949 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:24:35.272560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:24:35.275663 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:24:35.285344 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:24:35.287563 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:24:35.292541 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:24:35.357342 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:24:35.362673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:24:35.370199 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:24:35.503649 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none.
Jul 6 23:24:35.504770 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:24:35.508990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:24:35.513866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:24:35.524231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:24:35.528466 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:24:35.528561 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:24:35.530686 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:24:35.551149 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:24:35.557216 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:24:35.581645 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Jul 6 23:24:35.585805 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:24:35.585871 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:24:35.587189 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:24:35.595575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:24:35.691701 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:24:35.701691 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:24:35.710485 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:24:35.718063 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:24:35.861361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:24:35.869456 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:24:35.875141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:24:35.903948 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:24:35.906589 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:24:35.938054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:24:35.963222 ignition[1314]: INFO : Ignition 2.21.0
Jul 6 23:24:35.965362 ignition[1314]: INFO : Stage: mount
Jul 6 23:24:35.967086 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:35.967086 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:35.972267 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:35.974923 ignition[1314]: INFO : PUT result: OK
Jul 6 23:24:35.980397 ignition[1314]: INFO : mount: mount passed
Jul 6 23:24:35.982126 ignition[1314]: INFO : Ignition finished successfully
Jul 6 23:24:35.988682 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:24:35.992997 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:24:35.999281 systemd-networkd[1149]: eth0: Gained IPv6LL
Jul 6 23:24:36.023158 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:24:36.062651 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325)
Jul 6 23:24:36.066776 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:24:36.066817 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:24:36.068097 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:24:36.077075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:24:36.114805 ignition[1342]: INFO : Ignition 2.21.0
Jul 6 23:24:36.114805 ignition[1342]: INFO : Stage: files
Jul 6 23:24:36.119266 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:36.119266 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:36.119266 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:36.127201 ignition[1342]: INFO : PUT result: OK
Jul 6 23:24:36.130737 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:24:36.136067 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:24:36.136067 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:24:36.145816 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:24:36.148996 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:24:36.152545 unknown[1342]: wrote ssh authorized keys file for user: core
Jul 6 23:24:36.155500 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:24:36.155500 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:24:36.155500 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 6 23:24:36.238920 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:24:36.370478 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:24:36.374719 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:24:36.378572 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:24:36.382409 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:24:36.386138 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:24:36.386138 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:24:36.393989 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:24:36.397865 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:24:36.401735 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:24:36.410556 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:24:36.414910 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:24:36.414910 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:24:36.424697 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:24:36.424697 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:24:36.424697 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 6 23:24:37.009260 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:24:37.391761 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:24:37.396648 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:24:37.396648 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:24:37.403377 ignition[1342]: INFO : files: files passed
Jul 6 23:24:37.403377 ignition[1342]: INFO : Ignition finished successfully
Jul 6 23:24:37.430709 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:24:37.434557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:24:37.443963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:24:37.462304 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:24:37.463284 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:24:37.478530 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:37.478530 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:37.488049 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:37.488954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:24:37.497979 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:24:37.503228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:24:37.601700 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:24:37.602233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:24:37.610007 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:24:37.615328 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:24:37.617944 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:24:37.621279 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:24:37.661350 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:24:37.663735 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:24:37.697553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:37.702890 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:24:37.708380 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:24:37.712404 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:24:37.712662 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:24:37.720114 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:24:37.724523 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:24:37.727423 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:24:37.730440 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:24:37.736736 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:24:37.743661 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:24:37.748527 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:24:37.753505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:24:37.758926 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:24:37.763340 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:24:37.768212 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:24:37.770238 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:24:37.770480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:24:37.778428 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:24:37.781232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:24:37.788169 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:24:37.790802 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:24:37.796340 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:24:37.796564 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:24:37.804078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:24:37.804898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:24:37.813267 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:24:37.813683 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:24:37.821226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:24:37.823325 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:24:37.823565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:24:37.838182 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:24:37.843788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:24:37.844359 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:24:37.855704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:24:37.855946 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:24:37.872602 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:24:37.875289 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:24:37.899916 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:24:37.904701 ignition[1395]: INFO : Ignition 2.21.0
Jul 6 23:24:37.904701 ignition[1395]: INFO : Stage: umount
Jul 6 23:24:37.909560 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:37.909560 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:24:37.909560 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:24:37.917799 ignition[1395]: INFO : PUT result: OK
Jul 6 23:24:37.931901 ignition[1395]: INFO : umount: umount passed
Jul 6 23:24:37.934768 ignition[1395]: INFO : Ignition finished successfully
Jul 6 23:24:37.936929 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:24:37.937112 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:24:37.941520 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:24:37.941781 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:24:37.942951 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:24:37.943040 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:24:37.943216 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:24:37.943300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:24:37.943561 systemd[1]: Stopped target network.target - Network.
Jul 6 23:24:37.951044 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:24:37.951141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:24:37.972559 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:24:37.974611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:24:37.974786 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:24:37.979334 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:24:37.987721 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:24:37.992927 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:24:37.994368 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:24:37.996831 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:24:37.996902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:24:37.999745 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:24:37.999841 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:24:38.006269 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:24:38.006360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:24:38.009396 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:24:38.013486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:24:38.044543 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:24:38.046701 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:24:38.053859 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:24:38.054492 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:24:38.056684 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:24:38.071254 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:24:38.074840 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:24:38.081872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:24:38.082103 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:24:38.096342 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:24:38.098918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 6 23:24:38.099024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:24:38.118297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:24:38.118417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:24:38.126224 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:24:38.126441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:24:38.131106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:24:38.131192 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:24:38.141489 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:24:38.153473 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:24:38.155872 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:24:38.156925 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:24:38.157116 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:24:38.168327 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:24:38.169052 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:24:38.183201 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:24:38.184078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:24:38.193374 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:24:38.193511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:24:38.200236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:24:38.200720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:24:38.204819 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:24:38.204911 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:24:38.213162 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:24:38.213263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:24:38.220188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:24:38.220274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:24:38.231249 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:24:38.235561 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:24:38.235705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:24:38.244273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:24:38.244381 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:24:38.255030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:24:38.255117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:24:38.265488 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:24:38.265804 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:24:38.265894 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:24:38.266541 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:24:38.276263 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:24:38.295043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 6 23:24:38.296756 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:24:38.303084 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:24:38.309215 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:24:38.335302 systemd[1]: Switching root. Jul 6 23:24:38.389899 systemd-journald[258]: Journal stopped Jul 6 23:24:40.444936 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). Jul 6 23:24:40.445079 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:24:40.445114 kernel: SELinux: policy capability open_perms=1 Jul 6 23:24:40.445144 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:24:40.445173 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:24:40.445202 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:24:40.445229 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:24:40.445257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:24:40.445286 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:24:40.445322 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:24:40.445355 kernel: audit: type=1403 audit(1751844278.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:24:40.445391 systemd[1]: Successfully loaded SELinux policy in 62.088ms. Jul 6 23:24:40.445435 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.599ms. Jul 6 23:24:40.445468 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:24:40.445500 systemd[1]: Detected virtualization amazon. Jul 6 23:24:40.445529 systemd[1]: Detected architecture arm64. 
Jul 6 23:24:40.445559 systemd[1]: Detected first boot. Jul 6 23:24:40.445613 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:24:40.454751 zram_generator::config[1439]: No configuration found. Jul 6 23:24:40.454796 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:24:40.456760 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:24:40.456813 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:24:40.456845 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:24:40.456878 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:24:40.456908 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:24:40.456938 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:24:40.456967 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:24:40.457002 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:24:40.457033 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:24:40.457065 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:24:40.457096 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:24:40.457126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:24:40.457156 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:24:40.457186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:24:40.457217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:24:40.457247 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 6 23:24:40.457281 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:24:40.457310 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:24:40.457343 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:24:40.457370 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:24:40.457401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:24:40.457433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:24:40.457463 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:24:40.457496 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:24:40.457526 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:24:40.457555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:24:40.457604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:24:40.457660 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:24:40.457693 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:24:40.457723 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:24:40.457756 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:24:40.457784 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:24:40.457819 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:24:40.457849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:24:40.457877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:24:40.457905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:24:40.457935 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:24:40.457965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:24:40.457996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:24:40.458026 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:24:40.458054 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:24:40.458088 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:24:40.458117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:24:40.458146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:24:40.458174 systemd[1]: Reached target machines.target - Containers. Jul 6 23:24:40.458203 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:24:40.458231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:24:40.458259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:24:40.458286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:24:40.458320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:24:40.458351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:24:40.458384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:24:40.458412 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:24:40.458440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 6 23:24:40.458468 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:24:40.458498 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:24:40.458526 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:24:40.458554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:24:40.458586 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:24:40.466482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:24:40.466566 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:24:40.466597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:24:40.466658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:24:40.466703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:24:40.466735 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:24:40.466763 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:24:40.466794 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:24:40.466826 systemd[1]: Stopped verity-setup.service. Jul 6 23:24:40.466854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:24:40.466887 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:24:40.466919 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:24:40.466950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 6 23:24:40.466979 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:24:40.467009 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:24:40.467038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:24:40.467066 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:24:40.467098 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:24:40.467131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:24:40.467162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:24:40.467190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:24:40.467220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:24:40.467248 kernel: fuse: init (API version 7.41) Jul 6 23:24:40.467276 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:24:40.467304 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:24:40.467333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:24:40.467360 kernel: ACPI: bus type drm_connector registered Jul 6 23:24:40.467392 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:24:40.467420 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:24:40.467447 kernel: loop: module loaded Jul 6 23:24:40.467474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:24:40.467504 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:24:40.467532 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:24:40.467560 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:24:40.467590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 6 23:24:40.467647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:24:40.469095 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:24:40.469141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:24:40.469170 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:24:40.469201 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:24:40.469230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:24:40.469263 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:24:40.469292 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:24:40.469366 systemd-journald[1522]: Collecting audit messages is disabled. Jul 6 23:24:40.469425 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:24:40.469457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:24:40.469486 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:24:40.469518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:24:40.469552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:24:40.469602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:24:40.469771 systemd-journald[1522]: Journal started Jul 6 23:24:40.469827 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec244dc7223ca47430f1a1b86f363917) is 8M, max 75.3M, 67.3M free. 
Jul 6 23:24:40.488690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:24:40.488780 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:24:39.723288 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:24:39.746322 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:24:39.747154 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:24:40.502516 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:24:40.518269 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:24:40.537786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:24:40.564808 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:24:40.581741 kernel: loop0: detected capacity change from 0 to 211168 Jul 6 23:24:40.580349 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:24:40.585291 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:24:40.591118 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:24:40.597097 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:24:40.660205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:24:40.692601 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec244dc7223ca47430f1a1b86f363917 is 91.674ms for 932 entries. Jul 6 23:24:40.692601 systemd-journald[1522]: System Journal (/var/log/journal/ec244dc7223ca47430f1a1b86f363917) is 8M, max 195.6M, 187.6M free. Jul 6 23:24:40.806249 systemd-journald[1522]: Received client request to flush runtime journal. 
Jul 6 23:24:40.806330 kernel: loop1: detected capacity change from 0 to 107312 Jul 6 23:24:40.806365 kernel: loop2: detected capacity change from 0 to 61240 Jul 6 23:24:40.725168 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:24:40.751426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:24:40.812785 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:24:40.833379 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:24:40.843997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:24:40.886386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:24:40.918678 kernel: loop3: detected capacity change from 0 to 138376 Jul 6 23:24:40.931477 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 6 23:24:40.931519 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 6 23:24:40.948832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:24:40.982671 kernel: loop4: detected capacity change from 0 to 211168 Jul 6 23:24:41.032466 kernel: loop5: detected capacity change from 0 to 107312 Jul 6 23:24:41.059669 kernel: loop6: detected capacity change from 0 to 61240 Jul 6 23:24:41.081662 kernel: loop7: detected capacity change from 0 to 138376 Jul 6 23:24:41.121992 (sd-merge)[1597]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 6 23:24:41.124059 (sd-merge)[1597]: Merged extensions into '/usr'. Jul 6 23:24:41.133815 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:24:41.133849 systemd[1]: Reloading... Jul 6 23:24:41.362737 zram_generator::config[1623]: No configuration found. 
Jul 6 23:24:41.476602 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:24:41.607247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:24:41.807135 systemd[1]: Reloading finished in 672 ms. Jul 6 23:24:41.831681 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:24:41.835114 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:24:41.838419 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:24:41.852847 systemd[1]: Starting ensure-sysext.service... Jul 6 23:24:41.857924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:24:41.866053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:24:41.898055 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:24:41.898080 systemd[1]: Reloading... Jul 6 23:24:41.935852 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:24:41.935916 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:24:41.936507 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:24:41.942149 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:24:41.949409 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:24:41.951121 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. 
Jul 6 23:24:41.951251 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 6 23:24:41.951403 systemd-udevd[1678]: Using default interface naming scheme 'v255'. Jul 6 23:24:41.971411 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:24:41.971432 systemd-tmpfiles[1677]: Skipping /boot Jul 6 23:24:42.010714 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:24:42.010873 systemd-tmpfiles[1677]: Skipping /boot Jul 6 23:24:42.128668 zram_generator::config[1716]: No configuration found. Jul 6 23:24:42.392989 (udev-worker)[1704]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:24:42.451197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:24:42.707599 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:24:42.708583 systemd[1]: Reloading finished in 809 ms. Jul 6 23:24:42.723886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:24:42.728142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:24:42.787567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:24:42.792787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:24:42.799932 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:24:42.808006 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:24:42.817959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:24:42.825010 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 6 23:24:42.838170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:24:42.844195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:24:42.851185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:24:42.863885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:24:42.866347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:24:42.866927 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:24:42.875124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:24:42.875483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:24:42.875716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:24:42.884967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:24:42.895832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:24:42.898421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:24:42.899780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:24:42.900178 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:24:42.911885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:24:42.923248 systemd[1]: Finished ensure-sysext.service. Jul 6 23:24:42.942404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:24:42.942854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:24:42.973247 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:24:43.027839 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:24:43.057908 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:24:43.075467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:24:43.076510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:24:43.081416 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:24:43.082479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:24:43.085644 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:24:43.086775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:24:43.092069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:24:43.092229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:24:43.130552 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jul 6 23:24:43.146056 augenrules[1917]: No rules Jul 6 23:24:43.149350 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:24:43.150811 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:24:43.171717 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:24:43.199592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:24:43.206419 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:24:43.209105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:24:43.209396 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:24:43.288790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:24:43.392760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:24:43.594013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:24:43.603254 systemd-networkd[1877]: lo: Link UP Jul 6 23:24:43.603274 systemd-networkd[1877]: lo: Gained carrier Jul 6 23:24:43.606030 systemd-networkd[1877]: Enumeration completed Jul 6 23:24:43.606759 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:24:43.609724 systemd-networkd[1877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:24:43.609744 systemd-networkd[1877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:24:43.612917 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 6 23:24:43.615634 systemd-networkd[1877]: eth0: Link UP
Jul 6 23:24:43.615966 systemd-networkd[1877]: eth0: Gained carrier
Jul 6 23:24:43.616001 systemd-networkd[1877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:43.621940 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:24:43.638830 systemd-resolved[1878]: Positive Trust Anchors:
Jul 6 23:24:43.639280 systemd-resolved[1878]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:24:43.639348 systemd-resolved[1878]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:24:43.640726 systemd-networkd[1877]: eth0: DHCPv4 address 172.31.21.233/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:24:43.654748 systemd-resolved[1878]: Defaulting to hostname 'linux'.
Jul 6 23:24:43.659012 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:24:43.661670 systemd[1]: Reached target network.target - Network.
Jul 6 23:24:43.669801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:43.672502 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:24:43.674962 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:24:43.677780 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:24:43.680846 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:24:43.683237 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:24:43.685983 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:24:43.688700 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:24:43.688750 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:24:43.690655 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:24:43.694256 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:24:43.699580 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:24:43.706591 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:24:43.709979 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:24:43.712754 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:24:43.719009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:24:43.722838 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:24:43.728673 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:24:43.731959 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:24:43.735254 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:24:43.738526 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:24:43.740797 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:24:43.740853 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:24:43.744791 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:24:43.750013 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:24:43.758118 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:24:43.766762 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:24:43.771816 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:24:43.778020 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:24:43.780495 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:24:43.784284 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:24:43.792956 systemd[1]: Started ntpd.service - Network Time Service.
Jul 6 23:24:43.808473 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:24:43.820956 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 6 23:24:43.837909 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:24:43.848682 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:24:43.861336 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:24:43.865227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:24:43.867205 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:24:43.874456 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:24:43.882499 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:24:43.888851 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:24:43.910897 jq[1962]: false
Jul 6 23:24:43.913280 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:24:43.925078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:24:43.974316 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:24:43.974797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:24:43.984158 extend-filesystems[1963]: Found /dev/nvme0n1p6
Jul 6 23:24:44.005139 tar[1980]: linux-arm64/LICENSE
Jul 6 23:24:44.005139 tar[1980]: linux-arm64/helm
Jul 6 23:24:44.013440 jq[1976]: true
Jul 6 23:24:44.016297 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:24:44.021318 extend-filesystems[1963]: Found /dev/nvme0n1p9
Jul 6 23:24:44.027669 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:24:44.052956 extend-filesystems[1963]: Checking size of /dev/nvme0n1p9
Jul 6 23:24:44.065064 update_engine[1975]: I20250706 23:24:44.064793 1975 main.cc:92] Flatcar Update Engine starting
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:16:56 UTC 2025 (1): Starting
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: ----------------------------------------------------
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: ntp-4 is maintained by Network Time Foundation,
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: corporation. Support and training for ntp-4 are
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: available at https://www.nwtime.org/support
Jul 6 23:24:44.071876 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: ----------------------------------------------------
Jul 6 23:24:44.067531 (ntainerd)[1994]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:24:44.068756 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:16:56 UTC 2025 (1): Starting
Jul 6 23:24:44.068801 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 6 23:24:44.068820 ntpd[1965]: ----------------------------------------------------
Jul 6 23:24:44.068836 ntpd[1965]: ntp-4 is maintained by Network Time Foundation,
Jul 6 23:24:44.068854 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 6 23:24:44.068870 ntpd[1965]: corporation. Support and training for ntp-4 are
Jul 6 23:24:44.068886 ntpd[1965]: available at https://www.nwtime.org/support
Jul 6 23:24:44.068902 ntpd[1965]: ----------------------------------------------------
Jul 6 23:24:44.092331 ntpd[1965]: proto: precision = 0.096 usec (-23)
Jul 6 23:24:44.093369 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: proto: precision = 0.096 usec (-23)
Jul 6 23:24:44.099906 ntpd[1965]: basedate set to 2025-06-24
Jul 6 23:24:44.101827 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: basedate set to 2025-06-24
Jul 6 23:24:44.101827 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: gps base set to 2025-06-29 (week 2373)
Jul 6 23:24:44.102007 jq[2004]: true
Jul 6 23:24:44.099938 ntpd[1965]: gps base set to 2025-06-29 (week 2373)
Jul 6 23:24:44.111482 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123
Jul 6 23:24:44.111950 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123
Jul 6 23:24:44.111950 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 6 23:24:44.111581 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 6 23:24:44.116953 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listen normally on 3 eth0 172.31.21.233:123
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listen normally on 4 lo [::1]:123
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: bind(21) AF_INET6 fe80::4c5:b9ff:fe3e:abbd%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: unable to create socket on eth0 (5) for fe80::4c5:b9ff:fe3e:abbd%2#123
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: failed to init interface for address fe80::4c5:b9ff:fe3e:abbd%2
Jul 6 23:24:44.118085 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: Listening on routing socket on fd #21 for interface updates
Jul 6 23:24:44.117036 ntpd[1965]: Listen normally on 3 eth0 172.31.21.233:123
Jul 6 23:24:44.117100 ntpd[1965]: Listen normally on 4 lo [::1]:123
Jul 6 23:24:44.117178 ntpd[1965]: bind(21) AF_INET6 fe80::4c5:b9ff:fe3e:abbd%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:24:44.117214 ntpd[1965]: unable to create socket on eth0 (5) for fe80::4c5:b9ff:fe3e:abbd%2#123
Jul 6 23:24:44.117239 ntpd[1965]: failed to init interface for address fe80::4c5:b9ff:fe3e:abbd%2
Jul 6 23:24:44.117288 ntpd[1965]: Listening on routing socket on fd #21 for interface updates
Jul 6 23:24:44.130140 dbus-daemon[1960]: [system] SELinux support is enabled
Jul 6 23:24:44.130451 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:24:44.139570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:24:44.139656 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:24:44.142793 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:24:44.142828 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:24:44.183384 extend-filesystems[1963]: Resized partition /dev/nvme0n1p9
Jul 6 23:24:44.190541 dbus-daemon[1960]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1877 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 6 23:24:44.194685 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:24:44.195573 coreos-metadata[1959]: Jul 06 23:24:44.195 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 6 23:24:44.195970 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:24:44.195970 ntpd[1965]: 6 Jul 23:24:44 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:24:44.194744 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 6 23:24:44.196110 extend-filesystems[2017]: resize2fs 1.47.2 (1-Jan-2025)
Jul 6 23:24:44.200979 coreos-metadata[1959]: Jul 06 23:24:44.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 6 23:24:44.217958 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 6 23:24:44.206175 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:24:44.218212 update_engine[1975]: I20250706 23:24:44.208431 1975 update_check_scheduler.cc:74] Next update check in 6m20s
Jul 6 23:24:44.218267 coreos-metadata[1959]: Jul 06 23:24:44.205 INFO Fetch successful
Jul 6 23:24:44.218267 coreos-metadata[1959]: Jul 06 23:24:44.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 6 23:24:44.227314 coreos-metadata[1959]: Jul 06 23:24:44.219 INFO Fetch successful
Jul 6 23:24:44.227314 coreos-metadata[1959]: Jul 06 23:24:44.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 6 23:24:44.227314 coreos-metadata[1959]: Jul 06 23:24:44.226 INFO Fetch successful
Jul 6 23:24:44.227314 coreos-metadata[1959]: Jul 06 23:24:44.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 6 23:24:44.221192 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 6 23:24:44.232093 coreos-metadata[1959]: Jul 06 23:24:44.230 INFO Fetch successful
Jul 6 23:24:44.232093 coreos-metadata[1959]: Jul 06 23:24:44.231 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 6 23:24:44.244532 coreos-metadata[1959]: Jul 06 23:24:44.241 INFO Fetch failed with 404: resource not found
Jul 6 23:24:44.244532 coreos-metadata[1959]: Jul 06 23:24:44.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 6 23:24:44.245028 coreos-metadata[1959]: Jul 06 23:24:44.244 INFO Fetch successful
Jul 6 23:24:44.248672 coreos-metadata[1959]: Jul 06 23:24:44.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 6 23:24:44.249354 coreos-metadata[1959]: Jul 06 23:24:44.249 INFO Fetch successful
Jul 6 23:24:44.249354 coreos-metadata[1959]: Jul 06 23:24:44.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 6 23:24:44.257764 coreos-metadata[1959]: Jul 06 23:24:44.257 INFO Fetch successful
Jul 6 23:24:44.257764 coreos-metadata[1959]: Jul 06 23:24:44.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 6 23:24:44.257764 coreos-metadata[1959]: Jul 06 23:24:44.257 INFO Fetch successful
Jul 6 23:24:44.257764 coreos-metadata[1959]: Jul 06 23:24:44.257 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 6 23:24:44.258116 coreos-metadata[1959]: Jul 06 23:24:44.258 INFO Fetch successful
Jul 6 23:24:44.263888 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:24:44.267439 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 6 23:24:44.387633 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 6 23:24:44.389465 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:24:44.392467 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:24:44.413713 extend-filesystems[2017]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 6 23:24:44.413713 extend-filesystems[2017]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:24:44.413713 extend-filesystems[2017]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 6 23:24:44.423611 extend-filesystems[1963]: Resized filesystem in /dev/nvme0n1p9
Jul 6 23:24:44.420635 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:24:44.427720 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:24:44.436327 bash[2039]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:24:44.443094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:24:44.452876 systemd[1]: Starting sshkeys.service...
Jul 6 23:24:44.455325 systemd-logind[1972]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:24:44.455360 systemd-logind[1972]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 6 23:24:44.465977 systemd-logind[1972]: New seat seat0.
Jul 6 23:24:44.474368 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:24:44.564376 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:24:44.572459 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:24:44.821028 coreos-metadata[2055]: Jul 06 23:24:44.817 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 6 23:24:44.821028 coreos-metadata[2055]: Jul 06 23:24:44.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 6 23:24:44.824736 coreos-metadata[2055]: Jul 06 23:24:44.824 INFO Fetch successful
Jul 6 23:24:44.824736 coreos-metadata[2055]: Jul 06 23:24:44.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 6 23:24:44.828692 coreos-metadata[2055]: Jul 06 23:24:44.826 INFO Fetch successful
Jul 6 23:24:44.833941 unknown[2055]: wrote ssh authorized keys file for user: core
Jul 6 23:24:44.867468 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 6 23:24:44.882364 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 6 23:24:44.883601 dbus-daemon[1960]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2018 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 6 23:24:44.898877 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 6 23:24:45.010648 update-ssh-keys[2077]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:24:45.013819 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 6 23:24:45.026453 systemd[1]: Finished sshkeys.service.
Jul 6 23:24:45.044673 containerd[1994]: time="2025-07-06T23:24:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:24:45.055536 containerd[1994]: time="2025-07-06T23:24:45.055391396Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:24:45.080057 ntpd[1965]: bind(24) AF_INET6 fe80::4c5:b9ff:fe3e:abbd%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:24:45.081051 ntpd[1965]: 6 Jul 23:24:45 ntpd[1965]: bind(24) AF_INET6 fe80::4c5:b9ff:fe3e:abbd%2#123 flags 0x11 failed: Cannot assign requested address
Jul 6 23:24:45.081051 ntpd[1965]: 6 Jul 23:24:45 ntpd[1965]: unable to create socket on eth0 (6) for fe80::4c5:b9ff:fe3e:abbd%2#123
Jul 6 23:24:45.081051 ntpd[1965]: 6 Jul 23:24:45 ntpd[1965]: failed to init interface for address fe80::4c5:b9ff:fe3e:abbd%2
Jul 6 23:24:45.080114 ntpd[1965]: unable to create socket on eth0 (6) for fe80::4c5:b9ff:fe3e:abbd%2#123
Jul 6 23:24:45.080139 ntpd[1965]: failed to init interface for address fe80::4c5:b9ff:fe3e:abbd%2
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126269060Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.004µs"
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126335816Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126377168Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126687656Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126724340Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126776996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126920060Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127077 containerd[1994]: time="2025-07-06T23:24:45.126946400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127475 containerd[1994]: time="2025-07-06T23:24:45.127338980Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127475 containerd[1994]: time="2025-07-06T23:24:45.127373144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127475 containerd[1994]: time="2025-07-06T23:24:45.127422392Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:24:45.127475 containerd[1994]: time="2025-07-06T23:24:45.127445696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:24:45.131758 containerd[1994]: time="2025-07-06T23:24:45.127608800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:24:45.137654 containerd[1994]: time="2025-07-06T23:24:45.134212700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:24:45.137654 containerd[1994]: time="2025-07-06T23:24:45.135848264Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:24:45.137816 containerd[1994]: time="2025-07-06T23:24:45.137660348Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:24:45.137867 containerd[1994]: time="2025-07-06T23:24:45.137785760Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:24:45.139019 containerd[1994]: time="2025-07-06T23:24:45.138903956Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:24:45.140881 containerd[1994]: time="2025-07-06T23:24:45.140753036Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152231828Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152346224Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152381984Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152411240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152443676Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:24:45.152479 containerd[1994]: time="2025-07-06T23:24:45.152480576Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152510024Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152540336Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152569304Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152594792Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152635112Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:24:45.152888 containerd[1994]: time="2025-07-06T23:24:45.152669060Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.152894168Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.152932388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.152976068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.153003656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.153028796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.153053936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.153086564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:24:45.153118 containerd[1994]: time="2025-07-06T23:24:45.153112256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:24:45.153427 containerd[1994]: time="2025-07-06T23:24:45.153138548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:24:45.153427 containerd[1994]: time="2025-07-06T23:24:45.153173324Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:24:45.153427 containerd[1994]: time="2025-07-06T23:24:45.153199748Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:24:45.157501 containerd[1994]: time="2025-07-06T23:24:45.153843620Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:24:45.157501 containerd[1994]: time="2025-07-06T23:24:45.153901280Z" level=info msg="Start snapshots syncer"
Jul 6 23:24:45.157501 containerd[1994]: time="2025-07-06T23:24:45.153948152Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.154610672Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155016416Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155459624Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155768300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155822912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155861672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155891480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155930516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.155967344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156015224Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156081764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156121580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156153476Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156225812Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156268304Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156300620Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156335036Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156375356Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156412496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:24:45.157891 containerd[1994]: time="2025-07-06T23:24:45.156448700Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:24:45.167008 containerd[1994]: time="2025-07-06T23:24:45.166730756Z" level=info msg="runtime interface created"
Jul 6 23:24:45.167008 containerd[1994]: time="2025-07-06T23:24:45.166786856Z" level=info msg="created NRI interface"
Jul 6 23:24:45.167008 containerd[1994]: time="2025-07-06T23:24:45.166825508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:24:45.167008 containerd[1994]: time="2025-07-06T23:24:45.166874948Z" level=info msg="Connect containerd service"
Jul 6 23:24:45.167262 containerd[1994]: time="2025-07-06T23:24:45.167021468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:24:45.171645 containerd[1994]: time="2025-07-06T23:24:45.170814656Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:24:45.464822 locksmithd[2020]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562253278Z" level=info msg="Start subscribing containerd event"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562388398Z" level=info msg="Start recovering state"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562534198Z" level=info msg="Start event monitor"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562560874Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562579894Z" level=info msg="Start streaming server"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562603030Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562651930Z" level=info msg="runtime interface starting up..."
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562669378Z" level=info msg="starting plugins..."
Jul 6 23:24:45.562799 containerd[1994]: time="2025-07-06T23:24:45.562698118Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:24:45.565417 containerd[1994]: time="2025-07-06T23:24:45.563733946Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:24:45.565417 containerd[1994]: time="2025-07-06T23:24:45.563839954Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jul 6 23:24:45.565417 containerd[1994]: time="2025-07-06T23:24:45.563924974Z" level=info msg="containerd successfully booted in 0.520382s" Jul 6 23:24:45.572831 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:24:45.579524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:24:45.604159 polkitd[2087]: Started polkitd version 126 Jul 6 23:24:45.632149 polkitd[2087]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:24:45.632794 polkitd[2087]: Loading rules from directory /run/polkit-1/rules.d Jul 6 23:24:45.632882 polkitd[2087]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:24:45.633494 polkitd[2087]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 6 23:24:45.633540 polkitd[2087]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:24:45.641697 polkitd[2087]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:24:45.643532 polkitd[2087]: Finished loading, compiling and executing 2 rules Jul 6 23:24:45.644793 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:24:45.653040 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:24:45.654075 polkitd[2087]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:24:45.660818 systemd-networkd[1877]: eth0: Gained IPv6LL Jul 6 23:24:45.668127 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:24:45.672466 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:24:45.681935 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:24:45.692107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:24:45.701216 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:24:45.747533 systemd-hostnamed[2018]: Hostname set to (transient) Jul 6 23:24:45.747962 systemd-resolved[1878]: System hostname changed to 'ip-172-31-21-233'. Jul 6 23:24:45.829732 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:24:45.931654 amazon-ssm-agent[2182]: Initializing new seelog logger Jul 6 23:24:45.931654 amazon-ssm-agent[2182]: New Seelog Logger Creation Complete Jul 6 23:24:45.931654 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.931654 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.931654 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 processing appconfig overrides Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 processing appconfig overrides Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.935672 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 processing appconfig overrides Jul 6 23:24:45.938316 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9346 INFO Proxy environment variables: Jul 6 23:24:45.944644 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:45.944644 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 6 23:24:45.944644 amazon-ssm-agent[2182]: 2025/07/06 23:24:45 processing appconfig overrides Jul 6 23:24:46.049481 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9346 INFO http_proxy: Jul 6 23:24:46.148497 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9346 INFO no_proxy: Jul 6 23:24:46.246712 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9346 INFO https_proxy: Jul 6 23:24:46.311679 tar[1980]: linux-arm64/README.md Jul 6 23:24:46.344898 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9348 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:24:46.354697 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:24:46.444176 amazon-ssm-agent[2182]: 2025-07-06 23:24:45.9349 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:24:46.544936 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0627 INFO Agent will take identity from EC2 Jul 6 23:24:46.585112 sshd_keygen[2008]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:24:46.644239 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0681 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 6 23:24:46.649715 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:24:46.660079 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:24:46.665323 systemd[1]: Started sshd@0-172.31.21.233:22-147.75.109.163:35618.service - OpenSSH per-connection server daemon (147.75.109.163:35618). Jul 6 23:24:46.716289 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:24:46.716769 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:24:46.724106 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:24:46.743659 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0682 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 6 23:24:46.767892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:24:46.777217 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jul 6 23:24:46.781936 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:24:46.786651 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:24:46.843768 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0682 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:24:46.944197 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0682 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 6 23:24:46.970666 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 35618 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:46.975827 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:46.996244 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:24:47.001847 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:24:47.031915 systemd-logind[1972]: New session 1 of user core. Jul 6 23:24:47.048659 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0682 INFO [Registrar] Starting registrar module Jul 6 23:24:47.053935 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:24:47.070912 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:24:47.090354 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:24:47.098322 systemd-logind[1972]: New session c1 of user core. 
Jul 6 23:24:47.147282 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0701 INFO [EC2Identity] Checking disk for registration info Jul 6 23:24:47.249639 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0701 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 6 23:24:47.349319 amazon-ssm-agent[2182]: 2025-07-06 23:24:46.0702 INFO [EC2Identity] Generating registration keypair Jul 6 23:24:47.446981 amazon-ssm-agent[2182]: 2025/07/06 23:24:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:47.447155 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:24:47.447541 amazon-ssm-agent[2182]: 2025/07/06 23:24:47 processing appconfig overrides Jul 6 23:24:47.450080 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.3965 INFO [EC2Identity] Checking write access before registering Jul 6 23:24:47.487268 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.3977 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 6 23:24:47.489671 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4467 INFO [EC2Identity] EC2 registration was successful. Jul 6 23:24:47.489671 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4467 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 6 23:24:47.489671 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4468 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:24:47.489671 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4468 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:24:47.491530 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4866 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:24:47.491530 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4870 INFO [CredentialRefresher] Credentials ready Jul 6 23:24:47.496459 systemd[2226]: Queued start job for default target default.target. Jul 6 23:24:47.513958 systemd[2226]: Created slice app.slice - User Application Slice. 
Jul 6 23:24:47.514027 systemd[2226]: Reached target paths.target - Paths. Jul 6 23:24:47.514116 systemd[2226]: Reached target timers.target - Timers. Jul 6 23:24:47.518802 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:24:47.544741 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:24:47.544987 systemd[2226]: Reached target sockets.target - Sockets. Jul 6 23:24:47.545089 systemd[2226]: Reached target basic.target - Basic System. Jul 6 23:24:47.545175 systemd[2226]: Reached target default.target - Main User Target. Jul 6 23:24:47.545235 systemd[2226]: Startup finished in 432ms. Jul 6 23:24:47.545596 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:24:47.551115 amazon-ssm-agent[2182]: 2025-07-06 23:24:47.4912 INFO [CredentialRefresher] Next credential rotation will be in 29.9999245498 minutes Jul 6 23:24:47.563948 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:24:47.625499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:47.629761 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:24:47.633738 systemd[1]: Startup finished in 3.746s (kernel) + 7.965s (initrd) + 9.043s (userspace) = 20.756s. Jul 6 23:24:47.651294 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:24:47.730340 systemd[1]: Started sshd@1-172.31.21.233:22-147.75.109.163:44968.service - OpenSSH per-connection server daemon (147.75.109.163:44968). Jul 6 23:24:47.928068 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 44968 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:47.930774 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:47.939826 systemd-logind[1972]: New session 2 of user core. 
Jul 6 23:24:47.951939 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:24:48.080070 ntpd[1965]: Listen normally on 7 eth0 [fe80::4c5:b9ff:fe3e:abbd%2]:123 Jul 6 23:24:48.103196 ntpd[1965]: 6 Jul 23:24:48 ntpd[1965]: Listen normally on 7 eth0 [fe80::4c5:b9ff:fe3e:abbd%2]:123 Jul 6 23:24:48.137312 sshd[2253]: Connection closed by 147.75.109.163 port 44968 Jul 6 23:24:48.138886 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:48.146811 systemd[1]: sshd@1-172.31.21.233:22-147.75.109.163:44968.service: Deactivated successfully. Jul 6 23:24:48.150675 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:24:48.153246 systemd-logind[1972]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:24:48.157121 systemd-logind[1972]: Removed session 2. Jul 6 23:24:48.177026 systemd[1]: Started sshd@2-172.31.21.233:22-147.75.109.163:44980.service - OpenSSH per-connection server daemon (147.75.109.163:44980). Jul 6 23:24:48.376329 sshd[2259]: Accepted publickey for core from 147.75.109.163 port 44980 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:48.378331 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:48.389262 systemd-logind[1972]: New session 3 of user core. Jul 6 23:24:48.393902 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:24:48.515643 sshd[2262]: Connection closed by 147.75.109.163 port 44980 Jul 6 23:24:48.516930 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:48.521612 amazon-ssm-agent[2182]: 2025-07-06 23:24:48.5212 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:24:48.528600 systemd[1]: sshd@2-172.31.21.233:22-147.75.109.163:44980.service: Deactivated successfully. Jul 6 23:24:48.533741 systemd[1]: session-3.scope: Deactivated successfully. 
Jul 6 23:24:48.540164 systemd-logind[1972]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:24:48.561575 systemd[1]: Started sshd@3-172.31.21.233:22-147.75.109.163:44992.service - OpenSSH per-connection server daemon (147.75.109.163:44992). Jul 6 23:24:48.565696 systemd-logind[1972]: Removed session 3. Jul 6 23:24:48.623800 amazon-ssm-agent[2182]: 2025-07-06 23:24:48.5437 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started Jul 6 23:24:48.706814 kubelet[2240]: E0706 23:24:48.706292 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:24:48.713964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:24:48.714274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:24:48.715482 systemd[1]: kubelet.service: Consumed 1.462s CPU time, 256.4M memory peak. Jul 6 23:24:48.724076 amazon-ssm-agent[2182]: 2025-07-06 23:24:48.5437 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:24:48.794383 sshd[2272]: Accepted publickey for core from 147.75.109.163 port 44992 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:48.797370 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:48.815145 systemd-logind[1972]: New session 4 of user core. Jul 6 23:24:48.819940 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 6 23:24:48.945663 sshd[2285]: Connection closed by 147.75.109.163 port 44992 Jul 6 23:24:48.946475 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:48.952507 systemd[1]: sshd@3-172.31.21.233:22-147.75.109.163:44992.service: Deactivated successfully. Jul 6 23:24:48.955550 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:24:48.957362 systemd-logind[1972]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:24:48.960636 systemd-logind[1972]: Removed session 4. Jul 6 23:24:48.982038 systemd[1]: Started sshd@4-172.31.21.233:22-147.75.109.163:45008.service - OpenSSH per-connection server daemon (147.75.109.163:45008). Jul 6 23:24:49.174666 sshd[2291]: Accepted publickey for core from 147.75.109.163 port 45008 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:49.177135 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:49.186610 systemd-logind[1972]: New session 5 of user core. Jul 6 23:24:49.190886 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:24:49.307960 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:24:49.308598 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:49.332774 sudo[2294]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:49.356009 sshd[2293]: Connection closed by 147.75.109.163 port 45008 Jul 6 23:24:49.357044 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:49.364800 systemd[1]: sshd@4-172.31.21.233:22-147.75.109.163:45008.service: Deactivated successfully. Jul 6 23:24:49.369513 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:24:49.371362 systemd-logind[1972]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:24:49.374825 systemd-logind[1972]: Removed session 5. 
Jul 6 23:24:49.392723 systemd[1]: Started sshd@5-172.31.21.233:22-147.75.109.163:45018.service - OpenSSH per-connection server daemon (147.75.109.163:45018). Jul 6 23:24:49.593560 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 45018 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:49.596242 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:49.605717 systemd-logind[1972]: New session 6 of user core. Jul 6 23:24:49.608880 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:24:49.713320 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:24:49.714508 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:49.725949 sudo[2305]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:49.735389 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:24:49.736456 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:49.754172 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:24:49.821201 augenrules[2327]: No rules Jul 6 23:24:49.823889 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:24:49.825709 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:24:49.827923 sudo[2304]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:49.851456 sshd[2303]: Connection closed by 147.75.109.163 port 45018 Jul 6 23:24:49.852474 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:49.858596 systemd[1]: sshd@5-172.31.21.233:22-147.75.109.163:45018.service: Deactivated successfully. Jul 6 23:24:49.859362 systemd-logind[1972]: Session 6 logged out. Waiting for processes to exit. 
Jul 6 23:24:49.862555 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:24:49.868126 systemd-logind[1972]: Removed session 6. Jul 6 23:24:49.888973 systemd[1]: Started sshd@6-172.31.21.233:22-147.75.109.163:45020.service - OpenSSH per-connection server daemon (147.75.109.163:45020). Jul 6 23:24:50.090541 sshd[2336]: Accepted publickey for core from 147.75.109.163 port 45020 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:24:50.092974 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:50.100915 systemd-logind[1972]: New session 7 of user core. Jul 6 23:24:50.109869 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:24:50.213894 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:24:50.214495 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:50.795475 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:24:50.825088 (dockerd)[2356]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:24:51.404691 systemd-resolved[1878]: Clock change detected. Flushing caches. Jul 6 23:24:51.552138 dockerd[2356]: time="2025-07-06T23:24:51.551860948Z" level=info msg="Starting up" Jul 6 23:24:51.556494 dockerd[2356]: time="2025-07-06T23:24:51.555908200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:24:51.620095 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport759503111-merged.mount: Deactivated successfully. Jul 6 23:24:51.743312 dockerd[2356]: time="2025-07-06T23:24:51.743241749Z" level=info msg="Loading containers: start." 
Jul 6 23:24:51.757632 kernel: Initializing XFRM netlink socket Jul 6 23:24:52.058478 (udev-worker)[2381]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:24:52.133989 systemd-networkd[1877]: docker0: Link UP Jul 6 23:24:52.139061 dockerd[2356]: time="2025-07-06T23:24:52.138891111Z" level=info msg="Loading containers: done." Jul 6 23:24:52.162999 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1204419955-merged.mount: Deactivated successfully. Jul 6 23:24:52.169318 dockerd[2356]: time="2025-07-06T23:24:52.169251003Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:24:52.169550 dockerd[2356]: time="2025-07-06T23:24:52.169377627Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:24:52.169654 dockerd[2356]: time="2025-07-06T23:24:52.169601451Z" level=info msg="Initializing buildkit" Jul 6 23:24:52.205483 dockerd[2356]: time="2025-07-06T23:24:52.205351983Z" level=info msg="Completed buildkit initialization" Jul 6 23:24:52.222002 dockerd[2356]: time="2025-07-06T23:24:52.221902156Z" level=info msg="Daemon has completed initialization" Jul 6 23:24:52.222163 dockerd[2356]: time="2025-07-06T23:24:52.222016300Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:24:52.222414 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:24:53.266242 containerd[1994]: time="2025-07-06T23:24:53.266188061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:24:53.871991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661570759.mount: Deactivated successfully. 
Jul 6 23:24:55.287607 containerd[1994]: time="2025-07-06T23:24:55.287486947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:55.290742 containerd[1994]: time="2025-07-06T23:24:55.290671819Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 6 23:24:55.292855 containerd[1994]: time="2025-07-06T23:24:55.292766059Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:55.298419 containerd[1994]: time="2025-07-06T23:24:55.298318231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:55.300605 containerd[1994]: time="2025-07-06T23:24:55.300476311Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.034230074s" Jul 6 23:24:55.300605 containerd[1994]: time="2025-07-06T23:24:55.300531235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:24:55.303384 containerd[1994]: time="2025-07-06T23:24:55.303082687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:24:56.662248 containerd[1994]: time="2025-07-06T23:24:56.661788010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:56.663649 containerd[1994]: time="2025-07-06T23:24:56.663594310Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 6 23:24:56.665688 containerd[1994]: time="2025-07-06T23:24:56.665563318Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:56.670614 containerd[1994]: time="2025-07-06T23:24:56.670369522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:56.672345 containerd[1994]: time="2025-07-06T23:24:56.672139414Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.369003399s" Jul 6 23:24:56.672345 containerd[1994]: time="2025-07-06T23:24:56.672195106Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:24:56.673038 containerd[1994]: time="2025-07-06T23:24:56.672988918Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:24:57.880602 containerd[1994]: time="2025-07-06T23:24:57.880250472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:57.882987 containerd[1994]: time="2025-07-06T23:24:57.882888756Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 6 23:24:57.884386 containerd[1994]: time="2025-07-06T23:24:57.884329812Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:57.889085 containerd[1994]: time="2025-07-06T23:24:57.888984996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:57.891372 containerd[1994]: time="2025-07-06T23:24:57.890900760Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.217855982s" Jul 6 23:24:57.891372 containerd[1994]: time="2025-07-06T23:24:57.890955096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:24:57.891817 containerd[1994]: time="2025-07-06T23:24:57.891783048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:24:59.124853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903489534.mount: Deactivated successfully. Jul 6 23:24:59.128339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:24:59.132882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:59.498824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:24:59.515125 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:24:59.602370 kubelet[2642]: E0706 23:24:59.601683 2642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:24:59.609780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:24:59.610955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:24:59.611659 systemd[1]: kubelet.service: Consumed 319ms CPU time, 107.1M memory peak.
Jul 6 23:24:59.910275 containerd[1994]: time="2025-07-06T23:24:59.910109090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:24:59.912352 containerd[1994]: time="2025-07-06T23:24:59.912280106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jul 6 23:24:59.914981 containerd[1994]: time="2025-07-06T23:24:59.914908214Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:24:59.919323 containerd[1994]: time="2025-07-06T23:24:59.919247666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:24:59.920608 containerd[1994]: time="2025-07-06T23:24:59.920362034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.028433942s"
Jul 6 23:24:59.920608 containerd[1994]: time="2025-07-06T23:24:59.920415734Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 6 23:24:59.921100 containerd[1994]: time="2025-07-06T23:24:59.921045722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 6 23:25:00.483099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584460237.mount: Deactivated successfully.
Jul 6 23:25:01.709912 containerd[1994]: time="2025-07-06T23:25:01.709832667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:01.711840 containerd[1994]: time="2025-07-06T23:25:01.711771507Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jul 6 23:25:01.714697 containerd[1994]: time="2025-07-06T23:25:01.714627411Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:01.720363 containerd[1994]: time="2025-07-06T23:25:01.720286731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:01.722527 containerd[1994]: time="2025-07-06T23:25:01.722314203Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.801209993s"
Jul 6 23:25:01.722527 containerd[1994]: time="2025-07-06T23:25:01.722363907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 6 23:25:01.723254 containerd[1994]: time="2025-07-06T23:25:01.723219063Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:25:02.224436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140100465.mount: Deactivated successfully.
Jul 6 23:25:02.240372 containerd[1994]: time="2025-07-06T23:25:02.239397565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:25:02.241405 containerd[1994]: time="2025-07-06T23:25:02.241339537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 6 23:25:02.243945 containerd[1994]: time="2025-07-06T23:25:02.243875641Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:25:02.248279 containerd[1994]: time="2025-07-06T23:25:02.248178229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:25:02.249871 containerd[1994]: time="2025-07-06T23:25:02.249677965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 526.305038ms"
Jul 6 23:25:02.249871 containerd[1994]: time="2025-07-06T23:25:02.249729997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:25:02.250599 containerd[1994]: time="2025-07-06T23:25:02.250461961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 6 23:25:02.783265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559203313.mount: Deactivated successfully.
Jul 6 23:25:04.779068 containerd[1994]: time="2025-07-06T23:25:04.779000706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:04.782186 containerd[1994]: time="2025-07-06T23:25:04.782132694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jul 6 23:25:04.784386 containerd[1994]: time="2025-07-06T23:25:04.784334922Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:04.790496 containerd[1994]: time="2025-07-06T23:25:04.790399326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:25:04.792877 containerd[1994]: time="2025-07-06T23:25:04.792431118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.541720169s"
Jul 6 23:25:04.792877 containerd[1994]: time="2025-07-06T23:25:04.792481830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 6 23:25:09.712560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:25:09.716888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:25:10.050840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:25:10.065455 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:25:10.136265 kubelet[2789]: E0706 23:25:10.136196 2789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:25:10.141293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:25:10.141867 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:25:10.142829 systemd[1]: kubelet.service: Consumed 278ms CPU time, 104.8M memory peak.
Jul 6 23:25:11.348934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:25:11.350163 systemd[1]: kubelet.service: Consumed 278ms CPU time, 104.8M memory peak.
Jul 6 23:25:11.356977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:25:11.400884 systemd[1]: Reload requested from client PID 2803 ('systemctl') (unit session-7.scope)...
Jul 6 23:25:11.400917 systemd[1]: Reloading...
Jul 6 23:25:11.668633 zram_generator::config[2854]: No configuration found.
Jul 6 23:25:11.861226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:25:12.120661 systemd[1]: Reloading finished in 719 ms.
Jul 6 23:25:12.227514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:25:12.233528 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:25:12.234075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:25:12.234160 systemd[1]: kubelet.service: Consumed 227ms CPU time, 95M memory peak.
Jul 6 23:25:12.237542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:25:12.554499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:25:12.573109 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:25:12.648039 kubelet[2913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:25:12.648515 kubelet[2913]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:25:12.648652 kubelet[2913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:25:12.648887 kubelet[2913]: I0706 23:25:12.648834 2913 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:25:13.615543 kubelet[2913]: I0706 23:25:13.615484 2913 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:25:13.615863 kubelet[2913]: I0706 23:25:13.615833 2913 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:25:13.616351 kubelet[2913]: I0706 23:25:13.616329 2913 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:25:13.675908 kubelet[2913]: E0706 23:25:13.675849 2913 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.233:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:25:13.680136 kubelet[2913]: I0706 23:25:13.680070 2913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:25:13.695605 kubelet[2913]: I0706 23:25:13.694052 2913 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:25:13.699702 kubelet[2913]: I0706 23:25:13.699669 2913 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:25:13.702244 kubelet[2913]: I0706 23:25:13.702178 2913 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:25:13.702663 kubelet[2913]: I0706 23:25:13.702376 2913 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-233","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:25:13.703034 kubelet[2913]: I0706 23:25:13.703012 2913 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:25:13.703132 kubelet[2913]: I0706 23:25:13.703116 2913 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:25:13.705070 kubelet[2913]: I0706 23:25:13.705045 2913 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:25:13.712927 kubelet[2913]: I0706 23:25:13.712894 2913 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:25:13.713100 kubelet[2913]: I0706 23:25:13.713080 2913 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:25:13.713253 kubelet[2913]: I0706 23:25:13.713233 2913 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:25:13.715666 kubelet[2913]: I0706 23:25:13.715640 2913 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:25:13.722635 kubelet[2913]: E0706 23:25:13.722552 2913 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-233&limit=500&resourceVersion=0\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:25:13.723380 kubelet[2913]: E0706 23:25:13.723318 2913 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.233:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:25:13.723949 kubelet[2913]: I0706 23:25:13.723877 2913 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:25:13.725295 kubelet[2913]: I0706 23:25:13.725087 2913 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:25:13.725434 kubelet[2913]: W0706 23:25:13.725320 2913 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:25:13.738040 kubelet[2913]: I0706 23:25:13.737998 2913 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:25:13.738302 kubelet[2913]: I0706 23:25:13.738283 2913 server.go:1289] "Started kubelet"
Jul 6 23:25:13.738533 kubelet[2913]: I0706 23:25:13.738468 2913 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:25:13.741858 kubelet[2913]: I0706 23:25:13.741793 2913 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:25:13.751164 kubelet[2913]: I0706 23:25:13.750918 2913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:25:13.752687 kubelet[2913]: I0706 23:25:13.751855 2913 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:25:13.752962 kubelet[2913]: I0706 23:25:13.752936 2913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:25:13.756537 kubelet[2913]: E0706 23:25:13.754200 2913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.233:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.233:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-233.184fcd1f26275722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-233,UID:ip-172-31-21-233,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-233,},FirstTimestamp:2025-07-06 23:25:13.738204962 +0000 UTC m=+1.155608766,LastTimestamp:2025-07-06 23:25:13.738204962 +0000 UTC m=+1.155608766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-233,}"
Jul 6 23:25:13.756762 kubelet[2913]: I0706 23:25:13.756729 2913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:25:13.763551 kubelet[2913]: E0706 23:25:13.761980 2913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-233\" not found"
Jul 6 23:25:13.763551 kubelet[2913]: I0706 23:25:13.762066 2913 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:25:13.763551 kubelet[2913]: I0706 23:25:13.762595 2913 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:25:13.763551 kubelet[2913]: I0706 23:25:13.762721 2913 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:25:13.763970 kubelet[2913]: E0706 23:25:13.763651 2913 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:25:13.765172 kubelet[2913]: I0706 23:25:13.765122 2913 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:25:13.765317 kubelet[2913]: I0706 23:25:13.765270 2913 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:25:13.767325 kubelet[2913]: E0706 23:25:13.767271 2913 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:25:13.767845 kubelet[2913]: I0706 23:25:13.767752 2913 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:25:13.769629 kubelet[2913]: E0706 23:25:13.769540 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": dial tcp 172.31.21.233:6443: connect: connection refused" interval="200ms"
Jul 6 23:25:13.797629 kubelet[2913]: I0706 23:25:13.797442 2913 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:25:13.797629 kubelet[2913]: I0706 23:25:13.797481 2913 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:25:13.797629 kubelet[2913]: I0706 23:25:13.797518 2913 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:25:13.804045 kubelet[2913]: I0706 23:25:13.804012 2913 policy_none.go:49] "None policy: Start"
Jul 6 23:25:13.804220 kubelet[2913]: I0706 23:25:13.804102 2913 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:25:13.804632 kubelet[2913]: I0706 23:25:13.804609 2913 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:25:13.809542 kubelet[2913]: I0706 23:25:13.807553 2913 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:25:13.812465 kubelet[2913]: I0706 23:25:13.812428 2913 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:25:13.812680 kubelet[2913]: I0706 23:25:13.812659 2913 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:25:13.813161 kubelet[2913]: I0706 23:25:13.813138 2913 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:25:13.814384 kubelet[2913]: I0706 23:25:13.814193 2913 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 6 23:25:13.814384 kubelet[2913]: E0706 23:25:13.814273 2913 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:25:13.815494 kubelet[2913]: E0706 23:25:13.815317 2913 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 6 23:25:13.827693 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:25:13.841153 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:25:13.848451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:25:13.862306 kubelet[2913]: E0706 23:25:13.862257 2913 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 6 23:25:13.862601 kubelet[2913]: I0706 23:25:13.862545 2913 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:25:13.863046 kubelet[2913]: I0706 23:25:13.862860 2913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:25:13.864193 kubelet[2913]: I0706 23:25:13.864114 2913 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:25:13.865742 kubelet[2913]: E0706 23:25:13.865552 2913 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:25:13.865742 kubelet[2913]: E0706 23:25:13.865674 2913 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-233\" not found"
Jul 6 23:25:13.939255 systemd[1]: Created slice kubepods-burstable-pod09379c513da07c3314cca58d325fa6e1.slice - libcontainer container kubepods-burstable-pod09379c513da07c3314cca58d325fa6e1.slice.
Jul 6 23:25:13.967917 kubelet[2913]: I0706 23:25:13.967880 2913 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-233"
Jul 6 23:25:13.970403 kubelet[2913]: E0706 23:25:13.969220 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233"
Jul 6 23:25:13.970403 kubelet[2913]: E0706 23:25:13.969696 2913 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.233:6443/api/v1/nodes\": dial tcp 172.31.21.233:6443: connect: connection refused" node="ip-172-31-21-233"
Jul 6 23:25:13.971065 kubelet[2913]: E0706 23:25:13.970995 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": dial tcp 172.31.21.233:6443: connect: connection refused" interval="400ms"
Jul 6 23:25:13.979282 systemd[1]: Created slice kubepods-burstable-pod8a6faeed988ebde32d92abebdb81b7be.slice - libcontainer container kubepods-burstable-pod8a6faeed988ebde32d92abebdb81b7be.slice.
Jul 6 23:25:13.984434 kubelet[2913]: E0706 23:25:13.984385 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233"
Jul 6 23:25:13.991520 systemd[1]: Created slice kubepods-burstable-podbce72c48f0cbace0398b1125db2d9665.slice - libcontainer container kubepods-burstable-podbce72c48f0cbace0398b1125db2d9665.slice.
Jul 6 23:25:13.995284 kubelet[2913]: E0706 23:25:13.995218 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233"
Jul 6 23:25:14.064195 kubelet[2913]: I0706 23:25:14.064155 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233"
Jul 6 23:25:14.064316 kubelet[2913]: I0706 23:25:14.064215 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233"
Jul 6 23:25:14.064316 kubelet[2913]: I0706 23:25:14.064259 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233"
Jul 6 23:25:14.064316 kubelet[2913]: I0706 23:25:14.064294 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233"
Jul 6 23:25:14.064480 kubelet[2913]: I0706 23:25:14.064335 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bce72c48f0cbace0398b1125db2d9665-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-233\" (UID: \"bce72c48f0cbace0398b1125db2d9665\") " pod="kube-system/kube-scheduler-ip-172-31-21-233"
Jul 6 23:25:14.064480 kubelet[2913]: I0706 23:25:14.064370 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-ca-certs\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233"
Jul 6 23:25:14.064480 kubelet[2913]: I0706 23:25:14.064410 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233"
Jul 6 23:25:14.064480 kubelet[2913]: I0706 23:25:14.064444 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233"
Jul 6 23:25:14.064836 kubelet[2913]: I0706 23:25:14.064477 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233"
Jul 6 23:25:14.173018 kubelet[2913]: I0706 23:25:14.172444 2913 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-233"
Jul 6 23:25:14.173018 kubelet[2913]: E0706 23:25:14.172879 2913 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.233:6443/api/v1/nodes\": dial tcp 172.31.21.233:6443: connect: connection refused" node="ip-172-31-21-233"
Jul 6 23:25:14.271897 containerd[1994]: time="2025-07-06T23:25:14.271834933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-233,Uid:09379c513da07c3314cca58d325fa6e1,Namespace:kube-system,Attempt:0,}"
Jul 6 23:25:14.287486 containerd[1994]: time="2025-07-06T23:25:14.287318581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-233,Uid:8a6faeed988ebde32d92abebdb81b7be,Namespace:kube-system,Attempt:0,}"
Jul 6 23:25:14.296920 containerd[1994]: time="2025-07-06T23:25:14.296519617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-233,Uid:bce72c48f0cbace0398b1125db2d9665,Namespace:kube-system,Attempt:0,}"
Jul 6 23:25:14.342628 containerd[1994]: time="2025-07-06T23:25:14.342439033Z" level=info msg="connecting to shim b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769" address="unix:///run/containerd/s/cd1f0deeef74d22413461606752e165473009775e235acf40cef19924012a8ab" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:25:14.373616 containerd[1994]: time="2025-07-06T23:25:14.368977442Z" level=info msg="connecting to shim 02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9" address="unix:///run/containerd/s/31df8e4806a0506154b62fbdbe0b20adfabb5d4a94962b2fff2e935b9846e5bb" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:25:14.374343 kubelet[2913]: E0706 23:25:14.374294 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": dial tcp 172.31.21.233:6443: connect: connection refused" interval="800ms"
Jul 6 23:25:14.429737 containerd[1994]: time="2025-07-06T23:25:14.429441506Z" level=info msg="connecting to shim 5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1" address="unix:///run/containerd/s/323055f57fcccd431aca0d224d5bc393329f18c53d06eb05fb0863d88219aaeb" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:25:14.433215 systemd[1]: Started cri-containerd-b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769.scope - libcontainer container b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769.
Jul 6 23:25:14.476239 systemd[1]: Started cri-containerd-02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9.scope - libcontainer container 02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9.
Jul 6 23:25:14.515028 systemd[1]: Started cri-containerd-5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1.scope - libcontainer container 5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1.
Jul 6 23:25:14.582305 kubelet[2913]: I0706 23:25:14.582230 2913 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-233" Jul 6 23:25:14.583541 kubelet[2913]: E0706 23:25:14.583455 2913 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.233:6443/api/v1/nodes\": dial tcp 172.31.21.233:6443: connect: connection refused" node="ip-172-31-21-233" Jul 6 23:25:14.623602 containerd[1994]: time="2025-07-06T23:25:14.623326887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-233,Uid:09379c513da07c3314cca58d325fa6e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769\"" Jul 6 23:25:14.641608 containerd[1994]: time="2025-07-06T23:25:14.641223711Z" level=info msg="CreateContainer within sandbox \"b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:25:14.645566 containerd[1994]: time="2025-07-06T23:25:14.645513279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-233,Uid:8a6faeed988ebde32d92abebdb81b7be,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9\"" Jul 6 23:25:14.659974 containerd[1994]: time="2025-07-06T23:25:14.659914527Z" level=info msg="CreateContainer within sandbox \"02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:25:14.664805 containerd[1994]: time="2025-07-06T23:25:14.664666983Z" level=info msg="Container 3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:14.665476 containerd[1994]: time="2025-07-06T23:25:14.665351571Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-233,Uid:bce72c48f0cbace0398b1125db2d9665,Namespace:kube-system,Attempt:0,} returns sandbox id \"5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1\"" Jul 6 23:25:14.677815 containerd[1994]: time="2025-07-06T23:25:14.677757039Z" level=info msg="CreateContainer within sandbox \"5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:25:14.682685 containerd[1994]: time="2025-07-06T23:25:14.682524447Z" level=info msg="Container 0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:14.690498 containerd[1994]: time="2025-07-06T23:25:14.690389391Z" level=info msg="CreateContainer within sandbox \"b6ead1d31260eaa3f8086ad0337c39f0d99fd6c2d92484ac6d5b4a834a14f769\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195\"" Jul 6 23:25:14.691809 containerd[1994]: time="2025-07-06T23:25:14.691750755Z" level=info msg="StartContainer for \"3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195\"" Jul 6 23:25:14.695113 containerd[1994]: time="2025-07-06T23:25:14.694983135Z" level=info msg="connecting to shim 3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195" address="unix:///run/containerd/s/cd1f0deeef74d22413461606752e165473009775e235acf40cef19924012a8ab" protocol=ttrpc version=3 Jul 6 23:25:14.697456 kubelet[2913]: E0706 23:25:14.697397 2913 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-233&limit=500&resourceVersion=0\": dial tcp 172.31.21.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:25:14.702886 containerd[1994]: 
time="2025-07-06T23:25:14.702785043Z" level=info msg="CreateContainer within sandbox \"02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\"" Jul 6 23:25:14.704608 containerd[1994]: time="2025-07-06T23:25:14.704398119Z" level=info msg="StartContainer for \"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\"" Jul 6 23:25:14.706591 containerd[1994]: time="2025-07-06T23:25:14.706351131Z" level=info msg="connecting to shim 0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2" address="unix:///run/containerd/s/31df8e4806a0506154b62fbdbe0b20adfabb5d4a94962b2fff2e935b9846e5bb" protocol=ttrpc version=3 Jul 6 23:25:14.706991 containerd[1994]: time="2025-07-06T23:25:14.706715775Z" level=info msg="Container 4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:14.733618 containerd[1994]: time="2025-07-06T23:25:14.733451583Z" level=info msg="CreateContainer within sandbox \"5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\"" Jul 6 23:25:14.734651 containerd[1994]: time="2025-07-06T23:25:14.734252475Z" level=info msg="StartContainer for \"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\"" Jul 6 23:25:14.739526 containerd[1994]: time="2025-07-06T23:25:14.739460271Z" level=info msg="connecting to shim 4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424" address="unix:///run/containerd/s/323055f57fcccd431aca0d224d5bc393329f18c53d06eb05fb0863d88219aaeb" protocol=ttrpc version=3 Jul 6 23:25:14.746342 systemd[1]: Started cri-containerd-3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195.scope - libcontainer container 
3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195. Jul 6 23:25:14.772140 systemd[1]: Started cri-containerd-0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2.scope - libcontainer container 0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2. Jul 6 23:25:14.792021 systemd[1]: Started cri-containerd-4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424.scope - libcontainer container 4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424. Jul 6 23:25:15.027752 containerd[1994]: time="2025-07-06T23:25:15.027656389Z" level=info msg="StartContainer for \"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\" returns successfully" Jul 6 23:25:15.036587 containerd[1994]: time="2025-07-06T23:25:15.036502645Z" level=info msg="StartContainer for \"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\" returns successfully" Jul 6 23:25:15.036825 containerd[1994]: time="2025-07-06T23:25:15.036793369Z" level=info msg="StartContainer for \"3ba5f35843b9f886e00f69f0c7858212fc07b030ba9148580aa5c036cbd2a195\" returns successfully" Jul 6 23:25:15.176096 kubelet[2913]: E0706 23:25:15.176010 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": dial tcp 172.31.21.233:6443: connect: connection refused" interval="1.6s" Jul 6 23:25:15.388521 kubelet[2913]: I0706 23:25:15.388394 2913 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-233" Jul 6 23:25:15.881616 kubelet[2913]: E0706 23:25:15.881323 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:15.886516 kubelet[2913]: E0706 23:25:15.886483 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:15.895595 kubelet[2913]: E0706 23:25:15.894406 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:16.108431 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:25:16.898691 kubelet[2913]: E0706 23:25:16.898226 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:16.899893 kubelet[2913]: E0706 23:25:16.899864 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:16.900449 kubelet[2913]: E0706 23:25:16.900047 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:17.898930 kubelet[2913]: E0706 23:25:17.898097 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:17.901038 kubelet[2913]: E0706 23:25:17.900065 2913 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:18.421046 kubelet[2913]: E0706 23:25:18.420977 2913 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-233\" not found" node="ip-172-31-21-233" Jul 6 23:25:18.589217 kubelet[2913]: I0706 23:25:18.588786 2913 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-233" Jul 6 23:25:18.669435 kubelet[2913]: I0706 23:25:18.669391 2913 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:18.686021 kubelet[2913]: E0706 23:25:18.685894 2913 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-233\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:18.686383 kubelet[2913]: I0706 23:25:18.686192 2913 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:18.696906 kubelet[2913]: E0706 23:25:18.695921 2913 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-233\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:18.696906 kubelet[2913]: I0706 23:25:18.695965 2913 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:18.703530 kubelet[2913]: E0706 23:25:18.703491 2913 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-233\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:18.725641 kubelet[2913]: I0706 23:25:18.725420 2913 apiserver.go:52] "Watching apiserver" Jul 6 23:25:18.763713 kubelet[2913]: I0706 23:25:18.762753 2913 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:25:18.898051 kubelet[2913]: I0706 23:25:18.897857 2913 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:18.907344 kubelet[2913]: E0706 23:25:18.907066 2913 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-233\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:20.039419 
kubelet[2913]: I0706 23:25:20.037898 2913 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:21.054838 systemd[1]: Reload requested from client PID 3197 ('systemctl') (unit session-7.scope)... Jul 6 23:25:21.054872 systemd[1]: Reloading... Jul 6 23:25:21.247653 zram_generator::config[3241]: No configuration found. Jul 6 23:25:21.439975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:25:21.734343 systemd[1]: Reloading finished in 678 ms. Jul 6 23:25:21.806085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:25:21.823207 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:25:21.823796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:25:21.823890 systemd[1]: kubelet.service: Consumed 1.875s CPU time, 126.8M memory peak. Jul 6 23:25:21.828358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:25:22.192403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:25:22.211314 (kubelet)[3301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:25:22.316694 kubelet[3301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:25:22.316694 kubelet[3301]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:25:22.316694 kubelet[3301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:25:22.316694 kubelet[3301]: I0706 23:25:22.316255 3301 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:25:22.332507 kubelet[3301]: I0706 23:25:22.332448 3301 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:25:22.332507 kubelet[3301]: I0706 23:25:22.332498 3301 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:25:22.332952 kubelet[3301]: I0706 23:25:22.332921 3301 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:25:22.335256 kubelet[3301]: I0706 23:25:22.335209 3301 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:25:22.339659 kubelet[3301]: I0706 23:25:22.339474 3301 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:25:22.354847 kubelet[3301]: I0706 23:25:22.354800 3301 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:25:22.360288 kubelet[3301]: I0706 23:25:22.360237 3301 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:25:22.360777 kubelet[3301]: I0706 23:25:22.360726 3301 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:25:22.361032 kubelet[3301]: I0706 23:25:22.360773 3301 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-233","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:25:22.361184 kubelet[3301]: I0706 23:25:22.361042 3301 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 
23:25:22.361184 kubelet[3301]: I0706 23:25:22.361063 3301 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:25:22.361184 kubelet[3301]: I0706 23:25:22.361144 3301 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:25:22.361466 kubelet[3301]: I0706 23:25:22.361436 3301 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:25:22.362478 kubelet[3301]: I0706 23:25:22.362187 3301 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:25:22.362478 kubelet[3301]: I0706 23:25:22.362262 3301 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:25:22.362478 kubelet[3301]: I0706 23:25:22.362297 3301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:25:22.368382 kubelet[3301]: I0706 23:25:22.367648 3301 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:25:22.370049 kubelet[3301]: I0706 23:25:22.369266 3301 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:25:22.376212 kubelet[3301]: I0706 23:25:22.374866 3301 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:25:22.376212 kubelet[3301]: I0706 23:25:22.374954 3301 server.go:1289] "Started kubelet" Jul 6 23:25:22.383679 kubelet[3301]: I0706 23:25:22.383093 3301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:25:22.388674 kubelet[3301]: I0706 23:25:22.388411 3301 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:25:22.389942 kubelet[3301]: I0706 23:25:22.389716 3301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:25:22.410201 kubelet[3301]: I0706 23:25:22.407542 3301 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:25:22.410201 
kubelet[3301]: I0706 23:25:22.398987 3301 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:25:22.411555 kubelet[3301]: I0706 23:25:22.395942 3301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:25:22.411982 kubelet[3301]: I0706 23:25:22.399005 3301 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:25:22.411982 kubelet[3301]: I0706 23:25:22.407259 3301 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:25:22.421666 kubelet[3301]: I0706 23:25:22.419270 3301 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:25:22.421666 kubelet[3301]: E0706 23:25:22.399200 3301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-233\" not found" Jul 6 23:25:22.432706 kubelet[3301]: I0706 23:25:22.432667 3301 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:25:22.444473 kubelet[3301]: I0706 23:25:22.444221 3301 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:25:22.501474 kubelet[3301]: I0706 23:25:22.500017 3301 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:25:22.502521 kubelet[3301]: I0706 23:25:22.502486 3301 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:25:22.510051 kubelet[3301]: I0706 23:25:22.509760 3301 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:25:22.510051 kubelet[3301]: I0706 23:25:22.509804 3301 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:25:22.510051 kubelet[3301]: I0706 23:25:22.509837 3301 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:25:22.510051 kubelet[3301]: I0706 23:25:22.509850 3301 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:25:22.510051 kubelet[3301]: E0706 23:25:22.509914 3301 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:25:22.532223 kubelet[3301]: E0706 23:25:22.532145 3301 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:25:22.610940 kubelet[3301]: E0706 23:25:22.610868 3301 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:25:22.628764 kubelet[3301]: I0706 23:25:22.628720 3301 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:25:22.628764 kubelet[3301]: I0706 23:25:22.628752 3301 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:25:22.628968 kubelet[3301]: I0706 23:25:22.628788 3301 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:25:22.629025 kubelet[3301]: I0706 23:25:22.629012 3301 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:25:22.629079 kubelet[3301]: I0706 23:25:22.629031 3301 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:25:22.629079 kubelet[3301]: I0706 23:25:22.629061 3301 policy_none.go:49] "None policy: Start" Jul 6 23:25:22.629079 kubelet[3301]: I0706 23:25:22.629079 3301 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:25:22.629232 kubelet[3301]: I0706 23:25:22.629097 3301 state_mem.go:35] "Initializing 
new in-memory state store" Jul 6 23:25:22.629285 kubelet[3301]: I0706 23:25:22.629260 3301 state_mem.go:75] "Updated machine memory state" Jul 6 23:25:22.641425 kubelet[3301]: E0706 23:25:22.640277 3301 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:25:22.641425 kubelet[3301]: I0706 23:25:22.640552 3301 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:25:22.641425 kubelet[3301]: I0706 23:25:22.640603 3301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:25:22.642823 kubelet[3301]: I0706 23:25:22.642635 3301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:25:22.652006 kubelet[3301]: E0706 23:25:22.648979 3301 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:25:22.756097 kubelet[3301]: I0706 23:25:22.755625 3301 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-233" Jul 6 23:25:22.770808 kubelet[3301]: I0706 23:25:22.770741 3301 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-233" Jul 6 23:25:22.770978 kubelet[3301]: I0706 23:25:22.770919 3301 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-233" Jul 6 23:25:22.814288 kubelet[3301]: I0706 23:25:22.813882 3301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:22.814890 kubelet[3301]: I0706 23:25:22.814835 3301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.816491 kubelet[3301]: I0706 23:25:22.816447 3301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:22.827010 kubelet[3301]: I0706 23:25:22.826963 3301 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.827289 kubelet[3301]: I0706 23:25:22.827199 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.827289 kubelet[3301]: I0706 23:25:22.827246 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.828374 kubelet[3301]: I0706 23:25:22.827320 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bce72c48f0cbace0398b1125db2d9665-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-233\" (UID: \"bce72c48f0cbace0398b1125db2d9665\") " pod="kube-system/kube-scheduler-ip-172-31-21-233" Jul 6 23:25:22.828374 kubelet[3301]: I0706 23:25:22.827369 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " 
pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.828374 kubelet[3301]: I0706 23:25:22.827408 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a6faeed988ebde32d92abebdb81b7be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-233\" (UID: \"8a6faeed988ebde32d92abebdb81b7be\") " pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:22.834187 kubelet[3301]: E0706 23:25:22.833567 3301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-233\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:22.928831 kubelet[3301]: I0706 23:25:22.928562 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-ca-certs\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:22.928831 kubelet[3301]: I0706 23:25:22.928671 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:22.928831 kubelet[3301]: I0706 23:25:22.928707 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09379c513da07c3314cca58d325fa6e1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-233\" (UID: \"09379c513da07c3314cca58d325fa6e1\") " pod="kube-system/kube-apiserver-ip-172-31-21-233" Jul 6 23:25:23.391160 kubelet[3301]: I0706 23:25:23.391048 3301 apiserver.go:52] 
"Watching apiserver" Jul 6 23:25:23.413602 kubelet[3301]: I0706 23:25:23.412780 3301 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:25:23.563777 kubelet[3301]: I0706 23:25:23.563723 3301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:23.584879 kubelet[3301]: E0706 23:25:23.584830 3301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-233\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-233" Jul 6 23:25:23.658685 kubelet[3301]: I0706 23:25:23.658263 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-233" podStartSLOduration=1.6582414239999999 podStartE2EDuration="1.658241424s" podCreationTimestamp="2025-07-06 23:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:23.615968951 +0000 UTC m=+1.393673407" watchObservedRunningTime="2025-07-06 23:25:23.658241424 +0000 UTC m=+1.435945868" Jul 6 23:25:23.658685 kubelet[3301]: I0706 23:25:23.658458 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-233" podStartSLOduration=1.65844876 podStartE2EDuration="1.65844876s" podCreationTimestamp="2025-07-06 23:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:23.658417236 +0000 UTC m=+1.436121680" watchObservedRunningTime="2025-07-06 23:25:23.65844876 +0000 UTC m=+1.436153228" Jul 6 23:25:23.684851 kubelet[3301]: I0706 23:25:23.684370 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-233" podStartSLOduration=3.684351168 podStartE2EDuration="3.684351168s" 
podCreationTimestamp="2025-07-06 23:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:23.682010016 +0000 UTC m=+1.459714460" watchObservedRunningTime="2025-07-06 23:25:23.684351168 +0000 UTC m=+1.462055624" Jul 6 23:25:28.130711 kubelet[3301]: I0706 23:25:28.130624 3301 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:25:28.132104 containerd[1994]: time="2025-07-06T23:25:28.132034226Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:25:28.133121 kubelet[3301]: I0706 23:25:28.132947 3301 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:25:28.747974 systemd[1]: Created slice kubepods-besteffort-podcb33e682_9883_4840_93d0_7f7a34fc57a0.slice - libcontainer container kubepods-besteffort-podcb33e682_9883_4840_93d0_7f7a34fc57a0.slice. 
Jul 6 23:25:28.762255 kubelet[3301]: I0706 23:25:28.762156 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb33e682-9883-4840-93d0-7f7a34fc57a0-kube-proxy\") pod \"kube-proxy-r2jvr\" (UID: \"cb33e682-9883-4840-93d0-7f7a34fc57a0\") " pod="kube-system/kube-proxy-r2jvr" Jul 6 23:25:28.762255 kubelet[3301]: I0706 23:25:28.762222 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb33e682-9883-4840-93d0-7f7a34fc57a0-xtables-lock\") pod \"kube-proxy-r2jvr\" (UID: \"cb33e682-9883-4840-93d0-7f7a34fc57a0\") " pod="kube-system/kube-proxy-r2jvr" Jul 6 23:25:28.762448 kubelet[3301]: I0706 23:25:28.762262 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb33e682-9883-4840-93d0-7f7a34fc57a0-lib-modules\") pod \"kube-proxy-r2jvr\" (UID: \"cb33e682-9883-4840-93d0-7f7a34fc57a0\") " pod="kube-system/kube-proxy-r2jvr" Jul 6 23:25:28.762448 kubelet[3301]: I0706 23:25:28.762297 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxthm\" (UniqueName: \"kubernetes.io/projected/cb33e682-9883-4840-93d0-7f7a34fc57a0-kube-api-access-xxthm\") pod \"kube-proxy-r2jvr\" (UID: \"cb33e682-9883-4840-93d0-7f7a34fc57a0\") " pod="kube-system/kube-proxy-r2jvr" Jul 6 23:25:29.069212 containerd[1994]: time="2025-07-06T23:25:29.068991039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2jvr,Uid:cb33e682-9883-4840-93d0-7f7a34fc57a0,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:29.111021 containerd[1994]: time="2025-07-06T23:25:29.110234151Z" level=info msg="connecting to shim c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128" 
address="unix:///run/containerd/s/fd6c80380c8fff0165a451fdd2de6f0fd3d2560bc0a57d4b49527712beea593a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:25:29.160942 systemd[1]: Started cri-containerd-c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128.scope - libcontainer container c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128. Jul 6 23:25:29.246891 systemd[1]: Created slice kubepods-besteffort-pod904b00c6_423c_4959_9b02_8acb8e345f83.slice - libcontainer container kubepods-besteffort-pod904b00c6_423c_4959_9b02_8acb8e345f83.slice. Jul 6 23:25:29.266807 kubelet[3301]: I0706 23:25:29.266736 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhb7\" (UniqueName: \"kubernetes.io/projected/904b00c6-423c-4959-9b02-8acb8e345f83-kube-api-access-5zhb7\") pod \"tigera-operator-747864d56d-6fn94\" (UID: \"904b00c6-423c-4959-9b02-8acb8e345f83\") " pod="tigera-operator/tigera-operator-747864d56d-6fn94" Jul 6 23:25:29.267611 kubelet[3301]: I0706 23:25:29.267476 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/904b00c6-423c-4959-9b02-8acb8e345f83-var-lib-calico\") pod \"tigera-operator-747864d56d-6fn94\" (UID: \"904b00c6-423c-4959-9b02-8acb8e345f83\") " pod="tigera-operator/tigera-operator-747864d56d-6fn94" Jul 6 23:25:29.314612 containerd[1994]: time="2025-07-06T23:25:29.314475832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2jvr,Uid:cb33e682-9883-4840-93d0-7f7a34fc57a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128\"" Jul 6 23:25:29.325702 containerd[1994]: time="2025-07-06T23:25:29.325147336Z" level=info msg="CreateContainer within sandbox \"c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 
23:25:29.351149 containerd[1994]: time="2025-07-06T23:25:29.350969812Z" level=info msg="Container 49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:29.374782 containerd[1994]: time="2025-07-06T23:25:29.374680564Z" level=info msg="CreateContainer within sandbox \"c20874d0736d1eeded367da339bd2c71a626de50a71a9ef63a0ca4e481337128\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8\"" Jul 6 23:25:29.377889 containerd[1994]: time="2025-07-06T23:25:29.377421364Z" level=info msg="StartContainer for \"49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8\"" Jul 6 23:25:29.381163 containerd[1994]: time="2025-07-06T23:25:29.381074020Z" level=info msg="connecting to shim 49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8" address="unix:///run/containerd/s/fd6c80380c8fff0165a451fdd2de6f0fd3d2560bc0a57d4b49527712beea593a" protocol=ttrpc version=3 Jul 6 23:25:29.420974 systemd[1]: Started cri-containerd-49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8.scope - libcontainer container 49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8. 
Jul 6 23:25:29.507020 containerd[1994]: time="2025-07-06T23:25:29.506878973Z" level=info msg="StartContainer for \"49ace6e742a1db665d416b20fc5d3fabfdb9967e30d6e4e6796b0e36bdfa13a8\" returns successfully" Jul 6 23:25:29.559841 containerd[1994]: time="2025-07-06T23:25:29.558995393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6fn94,Uid:904b00c6-423c-4959-9b02-8acb8e345f83,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:25:29.626456 kubelet[3301]: I0706 23:25:29.626131 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r2jvr" podStartSLOduration=1.626106161 podStartE2EDuration="1.626106161s" podCreationTimestamp="2025-07-06 23:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:29.625002629 +0000 UTC m=+7.402707073" watchObservedRunningTime="2025-07-06 23:25:29.626106161 +0000 UTC m=+7.403810593" Jul 6 23:25:29.632668 containerd[1994]: time="2025-07-06T23:25:29.632593121Z" level=info msg="connecting to shim 862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a" address="unix:///run/containerd/s/3d6318f0a40fddb509cbb898205566b755f88c8b8ebee98a51f9c101281aaf8b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:25:29.677895 systemd[1]: Started cri-containerd-862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a.scope - libcontainer container 862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a. 
Jul 6 23:25:29.772866 containerd[1994]: time="2025-07-06T23:25:29.772801842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6fn94,Uid:904b00c6-423c-4959-9b02-8acb8e345f83,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a\"" Jul 6 23:25:29.777162 containerd[1994]: time="2025-07-06T23:25:29.777073506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:25:30.201972 update_engine[1975]: I20250706 23:25:30.201895 1975 update_attempter.cc:509] Updating boot flags... Jul 6 23:25:31.054153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221957365.mount: Deactivated successfully. Jul 6 23:25:31.898054 containerd[1994]: time="2025-07-06T23:25:31.897712809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:31.899105 containerd[1994]: time="2025-07-06T23:25:31.899053449Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:25:31.900054 containerd[1994]: time="2025-07-06T23:25:31.899985945Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:31.903540 containerd[1994]: time="2025-07-06T23:25:31.903460557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:31.905066 containerd[1994]: time="2025-07-06T23:25:31.904864521Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest 
\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.127699203s" Jul 6 23:25:31.905066 containerd[1994]: time="2025-07-06T23:25:31.904918533Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:25:31.913251 containerd[1994]: time="2025-07-06T23:25:31.912913821Z" level=info msg="CreateContainer within sandbox \"862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:25:31.929602 containerd[1994]: time="2025-07-06T23:25:31.926798325Z" level=info msg="Container 43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:31.938320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294252957.mount: Deactivated successfully. Jul 6 23:25:31.940101 containerd[1994]: time="2025-07-06T23:25:31.938514945Z" level=info msg="CreateContainer within sandbox \"862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\"" Jul 6 23:25:31.942055 containerd[1994]: time="2025-07-06T23:25:31.941938953Z" level=info msg="StartContainer for \"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\"" Jul 6 23:25:31.947364 containerd[1994]: time="2025-07-06T23:25:31.947248233Z" level=info msg="connecting to shim 43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849" address="unix:///run/containerd/s/3d6318f0a40fddb509cbb898205566b755f88c8b8ebee98a51f9c101281aaf8b" protocol=ttrpc version=3 Jul 6 23:25:31.997906 systemd[1]: Started cri-containerd-43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849.scope - libcontainer container 
43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849. Jul 6 23:25:32.053798 containerd[1994]: time="2025-07-06T23:25:32.053739113Z" level=info msg="StartContainer for \"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" returns successfully" Jul 6 23:25:32.651742 kubelet[3301]: I0706 23:25:32.651488 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6fn94" podStartSLOduration=1.5208427850000001 podStartE2EDuration="3.651469256s" podCreationTimestamp="2025-07-06 23:25:29 +0000 UTC" firstStartedPulling="2025-07-06 23:25:29.775991982 +0000 UTC m=+7.553696414" lastFinishedPulling="2025-07-06 23:25:31.906618453 +0000 UTC m=+9.684322885" observedRunningTime="2025-07-06 23:25:32.651077708 +0000 UTC m=+10.428782164" watchObservedRunningTime="2025-07-06 23:25:32.651469256 +0000 UTC m=+10.429173688" Jul 6 23:25:39.127103 sudo[2339]: pam_unix(sudo:session): session closed for user root Jul 6 23:25:39.158555 sshd[2338]: Connection closed by 147.75.109.163 port 45020 Jul 6 23:25:39.159363 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Jul 6 23:25:39.167311 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:25:39.168723 systemd[1]: session-7.scope: Consumed 10.397s CPU time, 235.1M memory peak. Jul 6 23:25:39.172408 systemd[1]: sshd@6-172.31.21.233:22-147.75.109.163:45020.service: Deactivated successfully. Jul 6 23:25:39.185796 systemd-logind[1972]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:25:39.193987 systemd-logind[1972]: Removed session 7. Jul 6 23:25:51.527183 systemd[1]: Created slice kubepods-besteffort-pod82eb5eed_84f3_463b_8a70_b982417ccc3b.slice - libcontainer container kubepods-besteffort-pod82eb5eed_84f3_463b_8a70_b982417ccc3b.slice. 
Jul 6 23:25:51.629389 kubelet[3301]: I0706 23:25:51.629311 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtq8m\" (UniqueName: \"kubernetes.io/projected/82eb5eed-84f3-463b-8a70-b982417ccc3b-kube-api-access-jtq8m\") pod \"calico-typha-cc589755-sn5r5\" (UID: \"82eb5eed-84f3-463b-8a70-b982417ccc3b\") " pod="calico-system/calico-typha-cc589755-sn5r5" Jul 6 23:25:51.630014 kubelet[3301]: I0706 23:25:51.629392 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/82eb5eed-84f3-463b-8a70-b982417ccc3b-typha-certs\") pod \"calico-typha-cc589755-sn5r5\" (UID: \"82eb5eed-84f3-463b-8a70-b982417ccc3b\") " pod="calico-system/calico-typha-cc589755-sn5r5" Jul 6 23:25:51.630014 kubelet[3301]: I0706 23:25:51.629444 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82eb5eed-84f3-463b-8a70-b982417ccc3b-tigera-ca-bundle\") pod \"calico-typha-cc589755-sn5r5\" (UID: \"82eb5eed-84f3-463b-8a70-b982417ccc3b\") " pod="calico-system/calico-typha-cc589755-sn5r5" Jul 6 23:25:51.838122 containerd[1994]: time="2025-07-06T23:25:51.837466324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc589755-sn5r5,Uid:82eb5eed-84f3-463b-8a70-b982417ccc3b,Namespace:calico-system,Attempt:0,}" Jul 6 23:25:51.912274 containerd[1994]: time="2025-07-06T23:25:51.912202828Z" level=info msg="connecting to shim fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09" address="unix:///run/containerd/s/8b156cb8fe1693a4a569f151ec30c69e66eddc7a2407bc7c4ffde18363a56eb0" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:25:51.983012 systemd[1]: Started cri-containerd-fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09.scope - libcontainer container 
fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09. Jul 6 23:25:52.045364 systemd[1]: Created slice kubepods-besteffort-pod63ac1d45_7f1c_4d88_b834_e3139e767e62.slice - libcontainer container kubepods-besteffort-pod63ac1d45_7f1c_4d88_b834_e3139e767e62.slice. Jul 6 23:25:52.136793 kubelet[3301]: I0706 23:25:52.136375 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-flexvol-driver-host\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.136793 kubelet[3301]: I0706 23:25:52.136438 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-var-run-calico\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.136793 kubelet[3301]: I0706 23:25:52.136481 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-policysync\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.136793 kubelet[3301]: I0706 23:25:52.136529 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-var-lib-calico\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.138873 kubelet[3301]: I0706 23:25:52.138644 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/63ac1d45-7f1c-4d88-b834-e3139e767e62-tigera-ca-bundle\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.138873 kubelet[3301]: I0706 23:25:52.138740 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-cni-net-dir\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.139311 kubelet[3301]: I0706 23:25:52.138835 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzgm\" (UniqueName: \"kubernetes.io/projected/63ac1d45-7f1c-4d88-b834-e3139e767e62-kube-api-access-mdzgm\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.140145 kubelet[3301]: I0706 23:25:52.139428 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-cni-log-dir\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.140145 kubelet[3301]: I0706 23:25:52.139918 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-lib-modules\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.140145 kubelet[3301]: I0706 23:25:52.139970 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/63ac1d45-7f1c-4d88-b834-e3139e767e62-node-certs\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.140145 kubelet[3301]: I0706 23:25:52.140004 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-xtables-lock\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.140145 kubelet[3301]: I0706 23:25:52.140039 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63ac1d45-7f1c-4d88-b834-e3139e767e62-cni-bin-dir\") pod \"calico-node-jhkjn\" (UID: \"63ac1d45-7f1c-4d88-b834-e3139e767e62\") " pod="calico-system/calico-node-jhkjn" Jul 6 23:25:52.232018 containerd[1994]: time="2025-07-06T23:25:52.231818390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc589755-sn5r5,Uid:82eb5eed-84f3-463b-8a70-b982417ccc3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09\"" Jul 6 23:25:52.238138 containerd[1994]: time="2025-07-06T23:25:52.238080422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:25:52.275049 kubelet[3301]: E0706 23:25:52.274735 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.281962 kubelet[3301]: W0706 23:25:52.279993 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.281962 kubelet[3301]: E0706 23:25:52.281679 3301 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.282628 kubelet[3301]: E0706 23:25:52.282500 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.282885 kubelet[3301]: W0706 23:25:52.282855 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.283011 kubelet[3301]: E0706 23:25:52.282983 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.287629 kubelet[3301]: E0706 23:25:52.287157 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:25:52.302885 kubelet[3301]: E0706 23:25:52.302847 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.303088 kubelet[3301]: W0706 23:25:52.303061 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.303203 kubelet[3301]: E0706 23:25:52.303181 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.303800 kubelet[3301]: E0706 23:25:52.303559 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.303800 kubelet[3301]: W0706 23:25:52.303613 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.303800 kubelet[3301]: E0706 23:25:52.303678 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.304151 kubelet[3301]: E0706 23:25:52.304129 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.304261 kubelet[3301]: W0706 23:25:52.304239 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.304369 kubelet[3301]: E0706 23:25:52.304348 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.305966 kubelet[3301]: E0706 23:25:52.305705 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.305966 kubelet[3301]: W0706 23:25:52.305737 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.305966 kubelet[3301]: E0706 23:25:52.305766 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.306867 kubelet[3301]: E0706 23:25:52.306835 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.307283 kubelet[3301]: W0706 23:25:52.307211 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.307283 kubelet[3301]: E0706 23:25:52.307250 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.309615 kubelet[3301]: E0706 23:25:52.307940 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.309615 kubelet[3301]: W0706 23:25:52.307965 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.309615 kubelet[3301]: E0706 23:25:52.307990 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.310366 kubelet[3301]: E0706 23:25:52.310159 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.310366 kubelet[3301]: W0706 23:25:52.310189 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.310366 kubelet[3301]: E0706 23:25:52.310219 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.310735 kubelet[3301]: E0706 23:25:52.310713 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.310868 kubelet[3301]: W0706 23:25:52.310844 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.310978 kubelet[3301]: E0706 23:25:52.310956 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.311413 kubelet[3301]: E0706 23:25:52.311384 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.311720 kubelet[3301]: W0706 23:25:52.311515 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.311720 kubelet[3301]: E0706 23:25:52.311545 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.311992 kubelet[3301]: E0706 23:25:52.311970 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.312125 kubelet[3301]: W0706 23:25:52.312101 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.312229 kubelet[3301]: E0706 23:25:52.312208 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.313947 kubelet[3301]: E0706 23:25:52.313768 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.313947 kubelet[3301]: W0706 23:25:52.313805 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.313947 kubelet[3301]: E0706 23:25:52.313837 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.314457 kubelet[3301]: E0706 23:25:52.314436 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.314744 kubelet[3301]: W0706 23:25:52.314560 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.314744 kubelet[3301]: E0706 23:25:52.314643 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.315181 kubelet[3301]: E0706 23:25:52.315155 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.316695 kubelet[3301]: W0706 23:25:52.316651 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.316882 kubelet[3301]: E0706 23:25:52.316857 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.317348 kubelet[3301]: E0706 23:25:52.317326 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.317567 kubelet[3301]: W0706 23:25:52.317466 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.317567 kubelet[3301]: E0706 23:25:52.317498 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.317973 kubelet[3301]: E0706 23:25:52.317949 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.318150 kubelet[3301]: W0706 23:25:52.318054 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.318150 kubelet[3301]: E0706 23:25:52.318079 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.318481 kubelet[3301]: E0706 23:25:52.318462 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.318614 kubelet[3301]: W0706 23:25:52.318593 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.319050 kubelet[3301]: E0706 23:25:52.319020 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.322100 kubelet[3301]: E0706 23:25:52.321759 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.322100 kubelet[3301]: W0706 23:25:52.321793 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.322100 kubelet[3301]: E0706 23:25:52.321824 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.322518 kubelet[3301]: E0706 23:25:52.322490 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.322791 kubelet[3301]: W0706 23:25:52.322687 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.322791 kubelet[3301]: E0706 23:25:52.322729 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.324474 kubelet[3301]: E0706 23:25:52.324372 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.324474 kubelet[3301]: W0706 23:25:52.324409 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.324474 kubelet[3301]: E0706 23:25:52.324441 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.325212 kubelet[3301]: E0706 23:25:52.325187 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.325662 kubelet[3301]: W0706 23:25:52.325592 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.325662 kubelet[3301]: E0706 23:25:52.325633 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.326945 kubelet[3301]: E0706 23:25:52.326831 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.326945 kubelet[3301]: W0706 23:25:52.326862 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.326945 kubelet[3301]: E0706 23:25:52.326890 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.345539 kubelet[3301]: E0706 23:25:52.345435 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.345539 kubelet[3301]: W0706 23:25:52.345470 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.345539 kubelet[3301]: E0706 23:25:52.345500 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.346974 kubelet[3301]: I0706 23:25:52.346656 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b45ea5b-9b83-404d-8a3d-f277c8e8af9a-registration-dir\") pod \"csi-node-driver-mhl8x\" (UID: \"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a\") " pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:25:52.347238 kubelet[3301]: E0706 23:25:52.347218 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.347345 kubelet[3301]: W0706 23:25:52.347322 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.347481 kubelet[3301]: E0706 23:25:52.347448 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.348535 kubelet[3301]: E0706 23:25:52.348429 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.348535 kubelet[3301]: W0706 23:25:52.348476 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.348535 kubelet[3301]: E0706 23:25:52.348504 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.349772 kubelet[3301]: E0706 23:25:52.349677 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.349772 kubelet[3301]: W0706 23:25:52.349708 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.349772 kubelet[3301]: E0706 23:25:52.349736 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.350459 kubelet[3301]: I0706 23:25:52.350332 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jklz8\" (UniqueName: \"kubernetes.io/projected/4b45ea5b-9b83-404d-8a3d-f277c8e8af9a-kube-api-access-jklz8\") pod \"csi-node-driver-mhl8x\" (UID: \"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a\") " pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:25:52.351485 kubelet[3301]: E0706 23:25:52.351449 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.352399 kubelet[3301]: W0706 23:25:52.351722 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.352399 kubelet[3301]: E0706 23:25:52.351760 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.353796 kubelet[3301]: E0706 23:25:52.353116 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.354077 kubelet[3301]: W0706 23:25:52.353998 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.354077 kubelet[3301]: E0706 23:25:52.354047 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.356148 kubelet[3301]: E0706 23:25:52.355058 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.356148 kubelet[3301]: W0706 23:25:52.355124 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.356148 kubelet[3301]: E0706 23:25:52.355152 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.356148 kubelet[3301]: I0706 23:25:52.355190 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b45ea5b-9b83-404d-8a3d-f277c8e8af9a-kubelet-dir\") pod \"csi-node-driver-mhl8x\" (UID: \"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a\") " pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:25:52.356787 kubelet[3301]: E0706 23:25:52.356741 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.356928 kubelet[3301]: W0706 23:25:52.356888 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.357002 kubelet[3301]: E0706 23:25:52.356931 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.357002 kubelet[3301]: I0706 23:25:52.356973 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b45ea5b-9b83-404d-8a3d-f277c8e8af9a-socket-dir\") pod \"csi-node-driver-mhl8x\" (UID: \"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a\") " pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:25:52.358038 containerd[1994]: time="2025-07-06T23:25:52.357937526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhkjn,Uid:63ac1d45-7f1c-4d88-b834-e3139e767e62,Namespace:calico-system,Attempt:0,}" Jul 6 23:25:52.359930 kubelet[3301]: E0706 23:25:52.359712 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.359930 kubelet[3301]: W0706 23:25:52.359754 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.359930 kubelet[3301]: E0706 23:25:52.359788 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.359930 kubelet[3301]: I0706 23:25:52.359855 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4b45ea5b-9b83-404d-8a3d-f277c8e8af9a-varrun\") pod \"csi-node-driver-mhl8x\" (UID: \"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a\") " pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:25:52.360612 kubelet[3301]: E0706 23:25:52.360506 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.360719 kubelet[3301]: W0706 23:25:52.360639 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.360719 kubelet[3301]: E0706 23:25:52.360673 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.361841 kubelet[3301]: E0706 23:25:52.361712 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.361841 kubelet[3301]: W0706 23:25:52.361758 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.361841 kubelet[3301]: E0706 23:25:52.361792 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.362997 kubelet[3301]: E0706 23:25:52.362942 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.363135 kubelet[3301]: W0706 23:25:52.363090 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.363315 kubelet[3301]: E0706 23:25:52.363276 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.364765 kubelet[3301]: E0706 23:25:52.364551 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.364765 kubelet[3301]: W0706 23:25:52.364753 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.364765 kubelet[3301]: E0706 23:25:52.364789 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.366696 kubelet[3301]: E0706 23:25:52.365945 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.366696 kubelet[3301]: W0706 23:25:52.365971 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.366696 kubelet[3301]: E0706 23:25:52.366000 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.366904 kubelet[3301]: E0706 23:25:52.366851 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.366904 kubelet[3301]: W0706 23:25:52.366877 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.367019 kubelet[3301]: E0706 23:25:52.366905 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.419750 containerd[1994]: time="2025-07-06T23:25:52.418050039Z" level=info msg="connecting to shim 07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454" address="unix:///run/containerd/s/16a63e2f190033d25e8e264990208aa78c2bf142cc4dba46f1deabfcf2b18230" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:25:52.462949 kubelet[3301]: E0706 23:25:52.462676 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.462949 kubelet[3301]: W0706 23:25:52.462751 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.462949 kubelet[3301]: E0706 23:25:52.462787 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.465891 kubelet[3301]: E0706 23:25:52.465853 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.466186 kubelet[3301]: W0706 23:25:52.466044 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.466186 kubelet[3301]: E0706 23:25:52.466088 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.467933 kubelet[3301]: E0706 23:25:52.467878 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.467933 kubelet[3301]: W0706 23:25:52.467922 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.468622 kubelet[3301]: E0706 23:25:52.468088 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.469886 kubelet[3301]: E0706 23:25:52.469739 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.469886 kubelet[3301]: W0706 23:25:52.469781 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.469886 kubelet[3301]: E0706 23:25:52.469814 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.471099 kubelet[3301]: E0706 23:25:52.471035 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.471681 kubelet[3301]: W0706 23:25:52.471625 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.472747 kubelet[3301]: E0706 23:25:52.471713 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.472890 kubelet[3301]: E0706 23:25:52.472802 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.472890 kubelet[3301]: W0706 23:25:52.472832 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.473146 kubelet[3301]: E0706 23:25:52.473105 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.474479 kubelet[3301]: E0706 23:25:52.474422 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.474479 kubelet[3301]: W0706 23:25:52.474461 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.474892 kubelet[3301]: E0706 23:25:52.474495 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.475976 kubelet[3301]: E0706 23:25:52.475924 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.475976 kubelet[3301]: W0706 23:25:52.475963 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.476547 kubelet[3301]: E0706 23:25:52.475996 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.476932 kubelet[3301]: E0706 23:25:52.476732 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.476932 kubelet[3301]: W0706 23:25:52.476754 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.476932 kubelet[3301]: E0706 23:25:52.476779 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.478104 kubelet[3301]: E0706 23:25:52.477708 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.478104 kubelet[3301]: W0706 23:25:52.477732 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.478104 kubelet[3301]: E0706 23:25:52.477761 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.479015 kubelet[3301]: E0706 23:25:52.478704 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.479015 kubelet[3301]: W0706 23:25:52.479003 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.480759 kubelet[3301]: E0706 23:25:52.479037 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.480847 kubelet[3301]: E0706 23:25:52.480793 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.480898 kubelet[3301]: W0706 23:25:52.480861 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.480898 kubelet[3301]: E0706 23:25:52.480892 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.482182 kubelet[3301]: E0706 23:25:52.482124 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.482182 kubelet[3301]: W0706 23:25:52.482162 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.482475 kubelet[3301]: E0706 23:25:52.482195 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.484815 kubelet[3301]: E0706 23:25:52.483332 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.484815 kubelet[3301]: W0706 23:25:52.483370 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.484815 kubelet[3301]: E0706 23:25:52.483401 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.484815 kubelet[3301]: E0706 23:25:52.484678 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.484815 kubelet[3301]: W0706 23:25:52.484725 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.484815 kubelet[3301]: E0706 23:25:52.484757 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.485199 kubelet[3301]: E0706 23:25:52.485091 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.485199 kubelet[3301]: W0706 23:25:52.485107 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.485199 kubelet[3301]: E0706 23:25:52.485126 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.486367 kubelet[3301]: E0706 23:25:52.485963 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.486367 kubelet[3301]: W0706 23:25:52.486000 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.486367 kubelet[3301]: E0706 23:25:52.486031 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.487013 kubelet[3301]: E0706 23:25:52.486964 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.487013 kubelet[3301]: W0706 23:25:52.486998 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.487535 kubelet[3301]: E0706 23:25:52.487228 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.492185 kubelet[3301]: E0706 23:25:52.489338 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.493212 kubelet[3301]: W0706 23:25:52.492934 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.493212 kubelet[3301]: E0706 23:25:52.492978 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.494056 kubelet[3301]: E0706 23:25:52.494023 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.498609 kubelet[3301]: W0706 23:25:52.498553 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.498864 kubelet[3301]: E0706 23:25:52.498837 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.502601 kubelet[3301]: E0706 23:25:52.500853 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.502601 kubelet[3301]: W0706 23:25:52.500887 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.502601 kubelet[3301]: E0706 23:25:52.500920 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.505296 kubelet[3301]: E0706 23:25:52.504755 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.505296 kubelet[3301]: W0706 23:25:52.504793 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.505296 kubelet[3301]: E0706 23:25:52.504825 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.508015 kubelet[3301]: E0706 23:25:52.507770 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.508015 kubelet[3301]: W0706 23:25:52.507804 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.508015 kubelet[3301]: E0706 23:25:52.507840 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.511099 kubelet[3301]: E0706 23:25:52.511057 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.515368 kubelet[3301]: W0706 23:25:52.514047 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.515368 kubelet[3301]: E0706 23:25:52.514098 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.517756 kubelet[3301]: E0706 23:25:52.516124 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.517756 kubelet[3301]: W0706 23:25:52.516156 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.517756 kubelet[3301]: E0706 23:25:52.516205 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:52.528116 systemd[1]: Started cri-containerd-07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454.scope - libcontainer container 07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454. Jul 6 23:25:52.556594 kubelet[3301]: E0706 23:25:52.556448 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:52.556594 kubelet[3301]: W0706 23:25:52.556484 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:52.556594 kubelet[3301]: E0706 23:25:52.556517 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:52.845789 containerd[1994]: time="2025-07-06T23:25:52.845268413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jhkjn,Uid:63ac1d45-7f1c-4d88-b834-e3139e767e62,Namespace:calico-system,Attempt:0,} returns sandbox id \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\"" Jul 6 23:25:53.511626 kubelet[3301]: E0706 23:25:53.511152 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:25:53.979711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010795147.mount: Deactivated successfully. Jul 6 23:25:55.511210 kubelet[3301]: E0706 23:25:55.510227 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:25:55.599924 containerd[1994]: time="2025-07-06T23:25:55.599848218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:55.604490 containerd[1994]: time="2025-07-06T23:25:55.604273218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:25:55.607980 containerd[1994]: time="2025-07-06T23:25:55.607898754Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:55.616806 containerd[1994]: time="2025-07-06T23:25:55.616729602Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:55.621560 containerd[1994]: time="2025-07-06T23:25:55.621481494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 3.383337772s" Jul 6 23:25:55.622033 containerd[1994]: time="2025-07-06T23:25:55.621551538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:25:55.628722 containerd[1994]: time="2025-07-06T23:25:55.627727495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:25:55.660212 containerd[1994]: time="2025-07-06T23:25:55.660163375Z" level=info msg="CreateContainer within sandbox \"fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:25:55.678624 containerd[1994]: time="2025-07-06T23:25:55.678217591Z" level=info msg="Container 8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:55.689893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663474775.mount: Deactivated successfully. 
Jul 6 23:25:55.696165 containerd[1994]: time="2025-07-06T23:25:55.696020959Z" level=info msg="CreateContainer within sandbox \"fbb6a9b137e5142af78d0c71e980ab4d0e9ec2a36b91d003eef543ca1587cd09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5\"" Jul 6 23:25:55.698151 containerd[1994]: time="2025-07-06T23:25:55.698027743Z" level=info msg="StartContainer for \"8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5\"" Jul 6 23:25:55.705200 containerd[1994]: time="2025-07-06T23:25:55.704940583Z" level=info msg="connecting to shim 8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5" address="unix:///run/containerd/s/8b156cb8fe1693a4a569f151ec30c69e66eddc7a2407bc7c4ffde18363a56eb0" protocol=ttrpc version=3 Jul 6 23:25:55.754906 systemd[1]: Started cri-containerd-8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5.scope - libcontainer container 8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5. Jul 6 23:25:55.846013 containerd[1994]: time="2025-07-06T23:25:55.844275596Z" level=info msg="StartContainer for \"8731577138d7282bb4e9b5fab219401218fb9525bcd998799d8d160b2461f6e5\" returns successfully" Jul 6 23:25:56.763634 kubelet[3301]: E0706 23:25:56.763360 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.763634 kubelet[3301]: W0706 23:25:56.763398 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.763634 kubelet[3301]: E0706 23:25:56.763428 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.764959 kubelet[3301]: E0706 23:25:56.764696 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.764959 kubelet[3301]: W0706 23:25:56.764725 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.764959 kubelet[3301]: E0706 23:25:56.764798 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.765312 kubelet[3301]: E0706 23:25:56.765290 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.765416 kubelet[3301]: W0706 23:25:56.765395 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.765523 kubelet[3301]: E0706 23:25:56.765503 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.766031 kubelet[3301]: E0706 23:25:56.766008 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.766347 kubelet[3301]: W0706 23:25:56.766139 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.766347 kubelet[3301]: E0706 23:25:56.766172 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.766632 kubelet[3301]: E0706 23:25:56.766602 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.766745 kubelet[3301]: W0706 23:25:56.766723 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.766851 kubelet[3301]: E0706 23:25:56.766830 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.768618 kubelet[3301]: E0706 23:25:56.768330 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.768618 kubelet[3301]: W0706 23:25:56.768378 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.768618 kubelet[3301]: E0706 23:25:56.768424 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.771079 kubelet[3301]: E0706 23:25:56.771005 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.771465 kubelet[3301]: W0706 23:25:56.771176 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.772637 kubelet[3301]: E0706 23:25:56.772411 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.773205 kubelet[3301]: E0706 23:25:56.772954 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.773205 kubelet[3301]: W0706 23:25:56.772981 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.773205 kubelet[3301]: E0706 23:25:56.773006 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.775548 kubelet[3301]: E0706 23:25:56.775510 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.775985 kubelet[3301]: W0706 23:25:56.775631 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.775985 kubelet[3301]: E0706 23:25:56.775666 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.776498 kubelet[3301]: E0706 23:25:56.776472 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.776785 kubelet[3301]: W0706 23:25:56.776682 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.777161 kubelet[3301]: E0706 23:25:56.776721 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.777457 kubelet[3301]: E0706 23:25:56.777433 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.777774 kubelet[3301]: W0706 23:25:56.777550 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.777774 kubelet[3301]: E0706 23:25:56.777619 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.778105 kubelet[3301]: E0706 23:25:56.778083 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.778384 kubelet[3301]: W0706 23:25:56.778199 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.778384 kubelet[3301]: E0706 23:25:56.778227 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.778653 kubelet[3301]: E0706 23:25:56.778633 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.778761 kubelet[3301]: W0706 23:25:56.778741 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.778869 kubelet[3301]: E0706 23:25:56.778848 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.779407 kubelet[3301]: E0706 23:25:56.779222 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.779407 kubelet[3301]: W0706 23:25:56.779242 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.779407 kubelet[3301]: E0706 23:25:56.779262 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.779734 kubelet[3301]: E0706 23:25:56.779713 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.779840 kubelet[3301]: W0706 23:25:56.779819 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.779958 kubelet[3301]: E0706 23:25:56.779935 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.839494 kubelet[3301]: E0706 23:25:56.839436 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.839494 kubelet[3301]: W0706 23:25:56.839474 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.839730 kubelet[3301]: E0706 23:25:56.839504 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.839945 kubelet[3301]: E0706 23:25:56.839902 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.839945 kubelet[3301]: W0706 23:25:56.839930 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.840065 kubelet[3301]: E0706 23:25:56.839953 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.840406 kubelet[3301]: E0706 23:25:56.840364 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.840406 kubelet[3301]: W0706 23:25:56.840392 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.840529 kubelet[3301]: E0706 23:25:56.840415 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.840851 kubelet[3301]: E0706 23:25:56.840822 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.840939 kubelet[3301]: W0706 23:25:56.840849 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.840939 kubelet[3301]: E0706 23:25:56.840870 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.841244 kubelet[3301]: E0706 23:25:56.841216 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.841313 kubelet[3301]: W0706 23:25:56.841241 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.841313 kubelet[3301]: E0706 23:25:56.841261 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.841696 kubelet[3301]: E0706 23:25:56.841666 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.841788 kubelet[3301]: W0706 23:25:56.841697 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.841788 kubelet[3301]: E0706 23:25:56.841718 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.842354 kubelet[3301]: E0706 23:25:56.842323 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.842354 kubelet[3301]: W0706 23:25:56.842350 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.842492 kubelet[3301]: E0706 23:25:56.842375 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.842832 kubelet[3301]: E0706 23:25:56.842803 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.842915 kubelet[3301]: W0706 23:25:56.842830 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.842915 kubelet[3301]: E0706 23:25:56.842852 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.843241 kubelet[3301]: E0706 23:25:56.843213 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.843312 kubelet[3301]: W0706 23:25:56.843239 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.843312 kubelet[3301]: E0706 23:25:56.843263 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.843668 kubelet[3301]: E0706 23:25:56.843641 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.843668 kubelet[3301]: W0706 23:25:56.843666 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.843793 kubelet[3301]: E0706 23:25:56.843686 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.844055 kubelet[3301]: E0706 23:25:56.844028 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.844115 kubelet[3301]: W0706 23:25:56.844052 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.844115 kubelet[3301]: E0706 23:25:56.844073 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.844445 kubelet[3301]: E0706 23:25:56.844417 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.844512 kubelet[3301]: W0706 23:25:56.844443 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.844512 kubelet[3301]: E0706 23:25:56.844462 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.844837 kubelet[3301]: E0706 23:25:56.844810 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.844896 kubelet[3301]: W0706 23:25:56.844835 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.844896 kubelet[3301]: E0706 23:25:56.844856 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.845257 kubelet[3301]: E0706 23:25:56.845214 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.845257 kubelet[3301]: W0706 23:25:56.845241 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.845377 kubelet[3301]: E0706 23:25:56.845263 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.845666 kubelet[3301]: E0706 23:25:56.845639 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.845743 kubelet[3301]: W0706 23:25:56.845665 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.845743 kubelet[3301]: E0706 23:25:56.845686 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.846130 kubelet[3301]: E0706 23:25:56.846098 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.846130 kubelet[3301]: W0706 23:25:56.846127 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.846247 kubelet[3301]: E0706 23:25:56.846150 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:56.847234 kubelet[3301]: E0706 23:25:56.847167 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.847234 kubelet[3301]: W0706 23:25:56.847209 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.847234 kubelet[3301]: E0706 23:25:56.847238 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:25:56.847716 kubelet[3301]: E0706 23:25:56.847686 3301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:25:56.847782 kubelet[3301]: W0706 23:25:56.847716 3301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:25:56.847782 kubelet[3301]: E0706 23:25:56.847739 3301 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:25:57.091667 containerd[1994]: time="2025-07-06T23:25:57.090659394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:57.093690 containerd[1994]: time="2025-07-06T23:25:57.093561402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:25:57.095941 containerd[1994]: time="2025-07-06T23:25:57.095826078Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:57.100216 containerd[1994]: time="2025-07-06T23:25:57.100094022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:57.102344 containerd[1994]: time="2025-07-06T23:25:57.101819934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.474019347s" Jul 6 23:25:57.102344 containerd[1994]: time="2025-07-06T23:25:57.101884266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:25:57.112594 containerd[1994]: time="2025-07-06T23:25:57.112379238Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:25:57.132228 containerd[1994]: time="2025-07-06T23:25:57.130848438Z" level=info msg="Container d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:25:57.146355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570140116.mount: Deactivated successfully. Jul 6 23:25:57.153160 containerd[1994]: time="2025-07-06T23:25:57.153001050Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\"" Jul 6 23:25:57.154496 containerd[1994]: time="2025-07-06T23:25:57.154360170Z" level=info msg="StartContainer for \"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\"" Jul 6 23:25:57.159484 containerd[1994]: time="2025-07-06T23:25:57.159338754Z" level=info msg="connecting to shim d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb" address="unix:///run/containerd/s/16a63e2f190033d25e8e264990208aa78c2bf142cc4dba46f1deabfcf2b18230" protocol=ttrpc version=3 Jul 6 23:25:57.203894 systemd[1]: Started cri-containerd-d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb.scope - libcontainer container d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb. Jul 6 23:25:57.289841 containerd[1994]: time="2025-07-06T23:25:57.289731271Z" level=info msg="StartContainer for \"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\" returns successfully" Jul 6 23:25:57.312730 systemd[1]: cri-containerd-d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb.scope: Deactivated successfully. 
Jul 6 23:25:57.320598 containerd[1994]: time="2025-07-06T23:25:57.320458195Z" level=info msg="received exit event container_id:\"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\" id:\"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\" pid:4164 exited_at:{seconds:1751844357 nanos:319977007}" Jul 6 23:25:57.320598 containerd[1994]: time="2025-07-06T23:25:57.320536207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\" id:\"d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb\" pid:4164 exited_at:{seconds:1751844357 nanos:319977007}" Jul 6 23:25:57.365968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f9e2bb052754411a49bfb909a33dc268ad31f73d71147bd108c4fcbeb873cb-rootfs.mount: Deactivated successfully. Jul 6 23:25:57.511533 kubelet[3301]: E0706 23:25:57.511115 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:25:57.757737 kubelet[3301]: I0706 23:25:57.757381 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:25:57.763095 containerd[1994]: time="2025-07-06T23:25:57.763037385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:25:57.818382 kubelet[3301]: I0706 23:25:57.817451 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cc589755-sn5r5" podStartSLOduration=3.4301069330000002 podStartE2EDuration="6.817426149s" podCreationTimestamp="2025-07-06 23:25:51 +0000 UTC" firstStartedPulling="2025-07-06 23:25:52.23708285 +0000 UTC m=+30.014787282" lastFinishedPulling="2025-07-06 23:25:55.624402042 +0000 UTC m=+33.402106498" 
observedRunningTime="2025-07-06 23:25:56.773697956 +0000 UTC m=+34.551402424" watchObservedRunningTime="2025-07-06 23:25:57.817426149 +0000 UTC m=+35.595130593" Jul 6 23:25:59.510988 kubelet[3301]: E0706 23:25:59.510932 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:26:01.510805 kubelet[3301]: E0706 23:26:01.510748 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:26:01.647685 containerd[1994]: time="2025-07-06T23:26:01.647081988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:01.649165 containerd[1994]: time="2025-07-06T23:26:01.649110900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:26:01.651623 containerd[1994]: time="2025-07-06T23:26:01.651564876Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:01.655940 containerd[1994]: time="2025-07-06T23:26:01.655876608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:01.657383 containerd[1994]: time="2025-07-06T23:26:01.657326148Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.894228079s" Jul 6 23:26:01.657540 containerd[1994]: time="2025-07-06T23:26:01.657512472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:26:01.667773 containerd[1994]: time="2025-07-06T23:26:01.667703269Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:26:01.688984 containerd[1994]: time="2025-07-06T23:26:01.688911565Z" level=info msg="Container 711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:01.711164 containerd[1994]: time="2025-07-06T23:26:01.711113365Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\"" Jul 6 23:26:01.712836 containerd[1994]: time="2025-07-06T23:26:01.712743769Z" level=info msg="StartContainer for \"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\"" Jul 6 23:26:01.717944 containerd[1994]: time="2025-07-06T23:26:01.717878737Z" level=info msg="connecting to shim 711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7" address="unix:///run/containerd/s/16a63e2f190033d25e8e264990208aa78c2bf142cc4dba46f1deabfcf2b18230" protocol=ttrpc version=3 Jul 6 23:26:01.751900 systemd[1]: Started cri-containerd-711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7.scope - 
libcontainer container 711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7. Jul 6 23:26:01.845305 containerd[1994]: time="2025-07-06T23:26:01.845128549Z" level=info msg="StartContainer for \"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\" returns successfully" Jul 6 23:26:02.817002 containerd[1994]: time="2025-07-06T23:26:02.816934142Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:26:02.823756 systemd[1]: cri-containerd-711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7.scope: Deactivated successfully. Jul 6 23:26:02.824313 systemd[1]: cri-containerd-711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7.scope: Consumed 930ms CPU time, 185.8M memory peak, 165.8M written to disk. Jul 6 23:26:02.838431 containerd[1994]: time="2025-07-06T23:26:02.838345526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\" id:\"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\" pid:4225 exited_at:{seconds:1751844362 nanos:836552954}" Jul 6 23:26:02.838952 containerd[1994]: time="2025-07-06T23:26:02.838476338Z" level=info msg="received exit event container_id:\"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\" id:\"711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7\" pid:4225 exited_at:{seconds:1751844362 nanos:836552954}" Jul 6 23:26:02.857818 kubelet[3301]: I0706 23:26:02.856046 3301 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:26:02.897245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-711725fb5c394c24d7721843296c3cf182164bccff25b4280ad797db5debaea7-rootfs.mount: Deactivated successfully. 
Jul 6 23:26:03.070922 systemd[1]: Created slice kubepods-besteffort-pod20bf9a29_649a_451b_86e1_85b50a04f25b.slice - libcontainer container kubepods-besteffort-pod20bf9a29_649a_451b_86e1_85b50a04f25b.slice. Jul 6 23:26:03.099263 kubelet[3301]: I0706 23:26:03.099202 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgdb\" (UniqueName: \"kubernetes.io/projected/cf658958-f190-4039-905e-8d2608b3af70-kube-api-access-4dgdb\") pod \"coredns-674b8bbfcf-x28nd\" (UID: \"cf658958-f190-4039-905e-8d2608b3af70\") " pod="kube-system/coredns-674b8bbfcf-x28nd" Jul 6 23:26:03.099468 kubelet[3301]: I0706 23:26:03.099304 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf658958-f190-4039-905e-8d2608b3af70-config-volume\") pod \"coredns-674b8bbfcf-x28nd\" (UID: \"cf658958-f190-4039-905e-8d2608b3af70\") " pod="kube-system/coredns-674b8bbfcf-x28nd" Jul 6 23:26:03.123653 systemd[1]: Created slice kubepods-burstable-podcf658958_f190_4039_905e_8d2608b3af70.slice - libcontainer container kubepods-burstable-podcf658958_f190_4039_905e_8d2608b3af70.slice. 
Jul 6 23:26:03.200630 kubelet[3301]: I0706 23:26:03.199838 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-ca-bundle\") pod \"whisker-fc44f9ff-plctw\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " pod="calico-system/whisker-fc44f9ff-plctw" Jul 6 23:26:03.201425 kubelet[3301]: I0706 23:26:03.201309 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8-calico-apiserver-certs\") pod \"calico-apiserver-749ffbdbfc-fxkxv\" (UID: \"143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8\") " pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" Jul 6 23:26:03.201425 kubelet[3301]: I0706 23:26:03.201391 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmmh\" (UniqueName: \"kubernetes.io/projected/143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8-kube-api-access-6fmmh\") pod \"calico-apiserver-749ffbdbfc-fxkxv\" (UID: \"143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8\") " pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" Jul 6 23:26:03.203356 kubelet[3301]: I0706 23:26:03.202821 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-backend-key-pair\") pod \"whisker-fc44f9ff-plctw\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " pod="calico-system/whisker-fc44f9ff-plctw" Jul 6 23:26:03.205014 kubelet[3301]: I0706 23:26:03.204156 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gjxw\" (UniqueName: \"kubernetes.io/projected/20bf9a29-649a-451b-86e1-85b50a04f25b-kube-api-access-5gjxw\") pod 
\"whisker-fc44f9ff-plctw\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " pod="calico-system/whisker-fc44f9ff-plctw" Jul 6 23:26:03.210498 systemd[1]: Created slice kubepods-besteffort-pod143d9a0e_7ac2_4d7b_89b4_af04bcbfb2e8.slice - libcontainer container kubepods-besteffort-pod143d9a0e_7ac2_4d7b_89b4_af04bcbfb2e8.slice. Jul 6 23:26:03.235142 systemd[1]: Created slice kubepods-besteffort-podf38d912d_4d66_4f1b_9595_521101a042ac.slice - libcontainer container kubepods-besteffort-podf38d912d_4d66_4f1b_9595_521101a042ac.slice. Jul 6 23:26:03.287054 systemd[1]: Created slice kubepods-besteffort-pod28e5776a_359b_488d_822c_afbf214fd771.slice - libcontainer container kubepods-besteffort-pod28e5776a_359b_488d_822c_afbf214fd771.slice. Jul 6 23:26:03.307066 kubelet[3301]: I0706 23:26:03.306993 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38d912d-4d66-4f1b-9595-521101a042ac-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-mb4p7\" (UID: \"f38d912d-4d66-4f1b-9595-521101a042ac\") " pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:03.331256 kubelet[3301]: I0706 23:26:03.307524 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f38d912d-4d66-4f1b-9595-521101a042ac-goldmane-key-pair\") pod \"goldmane-768f4c5c69-mb4p7\" (UID: \"f38d912d-4d66-4f1b-9595-521101a042ac\") " pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:03.331256 kubelet[3301]: I0706 23:26:03.307743 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38d912d-4d66-4f1b-9595-521101a042ac-config\") pod \"goldmane-768f4c5c69-mb4p7\" (UID: \"f38d912d-4d66-4f1b-9595-521101a042ac\") " pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:03.331256 kubelet[3301]: I0706 23:26:03.307860 
3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/f38d912d-4d66-4f1b-9595-521101a042ac-kube-api-access-5wsqf\") pod \"goldmane-768f4c5c69-mb4p7\" (UID: \"f38d912d-4d66-4f1b-9595-521101a042ac\") " pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:03.405625 containerd[1994]: time="2025-07-06T23:26:03.405270733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc44f9ff-plctw,Uid:20bf9a29-649a-451b-86e1-85b50a04f25b,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:03.408725 kubelet[3301]: I0706 23:26:03.408649 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qjd\" (UniqueName: \"kubernetes.io/projected/28e5776a-359b-488d-822c-afbf214fd771-kube-api-access-d5qjd\") pod \"calico-kube-controllers-7bd9d598d6-9mrmm\" (UID: \"28e5776a-359b-488d-822c-afbf214fd771\") " pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" Jul 6 23:26:03.409248 kubelet[3301]: I0706 23:26:03.409031 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e5776a-359b-488d-822c-afbf214fd771-tigera-ca-bundle\") pod \"calico-kube-controllers-7bd9d598d6-9mrmm\" (UID: \"28e5776a-359b-488d-822c-afbf214fd771\") " pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" Jul 6 23:26:03.438921 containerd[1994]: time="2025-07-06T23:26:03.438508717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x28nd,Uid:cf658958-f190-4039-905e-8d2608b3af70,Namespace:kube-system,Attempt:0,}" Jul 6 23:26:03.442642 systemd[1]: Created slice kubepods-burstable-pod5a713856_581b_4c37_b03c_ba0db6757d38.slice - libcontainer container kubepods-burstable-pod5a713856_581b_4c37_b03c_ba0db6757d38.slice. 
Jul 6 23:26:03.463617 systemd[1]: Created slice kubepods-besteffort-pod5eec331f_8576_4c50_84c1_9316f986b7f0.slice - libcontainer container kubepods-besteffort-pod5eec331f_8576_4c50_84c1_9316f986b7f0.slice. Jul 6 23:26:03.511602 kubelet[3301]: I0706 23:26:03.510945 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxpq\" (UniqueName: \"kubernetes.io/projected/5a713856-581b-4c37-b03c-ba0db6757d38-kube-api-access-gwxpq\") pod \"coredns-674b8bbfcf-s2z7k\" (UID: \"5a713856-581b-4c37-b03c-ba0db6757d38\") " pod="kube-system/coredns-674b8bbfcf-s2z7k" Jul 6 23:26:03.511602 kubelet[3301]: I0706 23:26:03.511015 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbc54\" (UniqueName: \"kubernetes.io/projected/5eec331f-8576-4c50-84c1-9316f986b7f0-kube-api-access-qbc54\") pod \"calico-apiserver-749ffbdbfc-th6v2\" (UID: \"5eec331f-8576-4c50-84c1-9316f986b7f0\") " pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" Jul 6 23:26:03.511602 kubelet[3301]: I0706 23:26:03.511076 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a713856-581b-4c37-b03c-ba0db6757d38-config-volume\") pod \"coredns-674b8bbfcf-s2z7k\" (UID: \"5a713856-581b-4c37-b03c-ba0db6757d38\") " pod="kube-system/coredns-674b8bbfcf-s2z7k" Jul 6 23:26:03.511602 kubelet[3301]: I0706 23:26:03.511139 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5eec331f-8576-4c50-84c1-9316f986b7f0-calico-apiserver-certs\") pod \"calico-apiserver-749ffbdbfc-th6v2\" (UID: \"5eec331f-8576-4c50-84c1-9316f986b7f0\") " pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" Jul 6 23:26:03.532733 systemd[1]: Created slice kubepods-besteffort-pod4b45ea5b_9b83_404d_8a3d_f277c8e8af9a.slice - 
libcontainer container kubepods-besteffort-pod4b45ea5b_9b83_404d_8a3d_f277c8e8af9a.slice. Jul 6 23:26:03.548731 containerd[1994]: time="2025-07-06T23:26:03.548323118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mhl8x,Uid:4b45ea5b-9b83-404d-8a3d-f277c8e8af9a,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:03.549354 containerd[1994]: time="2025-07-06T23:26:03.549207338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-fxkxv,Uid:143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:26:03.555759 containerd[1994]: time="2025-07-06T23:26:03.554533202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mb4p7,Uid:f38d912d-4d66-4f1b-9595-521101a042ac,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:03.609667 containerd[1994]: time="2025-07-06T23:26:03.608658998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9d598d6-9mrmm,Uid:28e5776a-359b-488d-822c-afbf214fd771,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:03.731370 containerd[1994]: time="2025-07-06T23:26:03.731295927Z" level=error msg="Failed to destroy network for sandbox \"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:03.737604 containerd[1994]: time="2025-07-06T23:26:03.735599619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc44f9ff-plctw,Uid:20bf9a29-649a-451b-86e1-85b50a04f25b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 6 23:26:03.737821 kubelet[3301]: E0706 23:26:03.735910 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:03.737821 kubelet[3301]: E0706 23:26:03.735997 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc44f9ff-plctw" Jul 6 23:26:03.737821 kubelet[3301]: E0706 23:26:03.736039 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc44f9ff-plctw" Jul 6 23:26:03.738010 kubelet[3301]: E0706 23:26:03.736131 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fc44f9ff-plctw_calico-system(20bf9a29-649a-451b-86e1-85b50a04f25b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fc44f9ff-plctw_calico-system(20bf9a29-649a-451b-86e1-85b50a04f25b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d230cdd09a9317669f4171f603f8850ca4e24ac30425ac08d39970003027f4ff\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fc44f9ff-plctw" podUID="20bf9a29-649a-451b-86e1-85b50a04f25b" Jul 6 23:26:03.757141 containerd[1994]: time="2025-07-06T23:26:03.757059051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s2z7k,Uid:5a713856-581b-4c37-b03c-ba0db6757d38,Namespace:kube-system,Attempt:0,}" Jul 6 23:26:03.783552 containerd[1994]: time="2025-07-06T23:26:03.783238275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-th6v2,Uid:5eec331f-8576-4c50-84c1-9316f986b7f0,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:26:03.849352 containerd[1994]: time="2025-07-06T23:26:03.845120331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:26:04.003679 containerd[1994]: time="2025-07-06T23:26:04.003597252Z" level=error msg="Failed to destroy network for sandbox \"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.010203 systemd[1]: run-netns-cni\x2d28ad7997\x2d45b6\x2d459b\x2d1543\x2ddfe3b63d6f00.mount: Deactivated successfully. 
Jul 6 23:26:04.018203 containerd[1994]: time="2025-07-06T23:26:04.018087552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x28nd,Uid:cf658958-f190-4039-905e-8d2608b3af70,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.018797 kubelet[3301]: E0706 23:26:04.018401 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.018797 kubelet[3301]: E0706 23:26:04.018483 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x28nd" Jul 6 23:26:04.018797 kubelet[3301]: E0706 23:26:04.018516 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x28nd" 
Jul 6 23:26:04.022555 kubelet[3301]: E0706 23:26:04.019802 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x28nd_kube-system(cf658958-f190-4039-905e-8d2608b3af70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x28nd_kube-system(cf658958-f190-4039-905e-8d2608b3af70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"225b411207df85ba368de6c0cf7fbfc451a81fb1bd0181811f1d35f471998aed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x28nd" podUID="cf658958-f190-4039-905e-8d2608b3af70" Jul 6 23:26:04.061962 containerd[1994]: time="2025-07-06T23:26:04.061811400Z" level=error msg="Failed to destroy network for sandbox \"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.067997 systemd[1]: run-netns-cni\x2d65db0c57\x2d413e\x2df175\x2d0d84\x2d8f2c81136359.mount: Deactivated successfully. 
Jul 6 23:26:04.077226 containerd[1994]: time="2025-07-06T23:26:04.077127012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-fxkxv,Uid:143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.077916 kubelet[3301]: E0706 23:26:04.077858 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.079875 kubelet[3301]: E0706 23:26:04.078234 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" Jul 6 23:26:04.079875 kubelet[3301]: E0706 23:26:04.078287 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" Jul 6 23:26:04.079875 kubelet[3301]: E0706 23:26:04.078381 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749ffbdbfc-fxkxv_calico-apiserver(143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749ffbdbfc-fxkxv_calico-apiserver(143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90bb8df903c7de61fc5261fc07bc3c28e6e418af59a5a513f4ebd0555d1f9dba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" podUID="143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8" Jul 6 23:26:04.109749 containerd[1994]: time="2025-07-06T23:26:04.109616605Z" level=error msg="Failed to destroy network for sandbox \"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.116369 systemd[1]: run-netns-cni\x2d16a823d0\x2db772\x2d04aa\x2d4cc2\x2df7e74fd6494c.mount: Deactivated successfully. 
Jul 6 23:26:04.119280 containerd[1994]: time="2025-07-06T23:26:04.118893925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mhl8x,Uid:4b45ea5b-9b83-404d-8a3d-f277c8e8af9a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.120002 kubelet[3301]: E0706 23:26:04.119949 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.120560 kubelet[3301]: E0706 23:26:04.120192 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mhl8x" Jul 6 23:26:04.120560 kubelet[3301]: E0706 23:26:04.120233 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mhl8x" Jul 6 
23:26:04.120560 kubelet[3301]: E0706 23:26:04.120324 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mhl8x_calico-system(4b45ea5b-9b83-404d-8a3d-f277c8e8af9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mhl8x_calico-system(4b45ea5b-9b83-404d-8a3d-f277c8e8af9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"436d76f74db0fd875e89b96011626470fd3cec7df717c6703a9eab59c28fa654\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mhl8x" podUID="4b45ea5b-9b83-404d-8a3d-f277c8e8af9a" Jul 6 23:26:04.146627 containerd[1994]: time="2025-07-06T23:26:04.143714065Z" level=error msg="Failed to destroy network for sandbox \"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.149008 systemd[1]: run-netns-cni\x2d2053ddce\x2db4bf\x2db438\x2d78ed\x2de868f5ed331b.mount: Deactivated successfully. 
Jul 6 23:26:04.156310 containerd[1994]: time="2025-07-06T23:26:04.156221725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9d598d6-9mrmm,Uid:28e5776a-359b-488d-822c-afbf214fd771,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.157263 kubelet[3301]: E0706 23:26:04.157210 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.157519 kubelet[3301]: E0706 23:26:04.157481 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" Jul 6 23:26:04.157889 kubelet[3301]: E0706 23:26:04.157842 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" Jul 6 23:26:04.159627 kubelet[3301]: E0706 23:26:04.158089 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bd9d598d6-9mrmm_calico-system(28e5776a-359b-488d-822c-afbf214fd771)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bd9d598d6-9mrmm_calico-system(28e5776a-359b-488d-822c-afbf214fd771)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac80ac285e833bdceaac109193ad79a72cb692372fee4efcc22326683d152f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" podUID="28e5776a-359b-488d-822c-afbf214fd771" Jul 6 23:26:04.161271 containerd[1994]: time="2025-07-06T23:26:04.160165921Z" level=error msg="Failed to destroy network for sandbox \"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.165801 containerd[1994]: time="2025-07-06T23:26:04.165734809Z" level=error msg="Failed to destroy network for sandbox \"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.168096 containerd[1994]: time="2025-07-06T23:26:04.167947729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mb4p7,Uid:f38d912d-4d66-4f1b-9595-521101a042ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.169185 kubelet[3301]: E0706 23:26:04.168295 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.169185 kubelet[3301]: E0706 23:26:04.168668 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:04.169185 kubelet[3301]: E0706 23:26:04.168703 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-mb4p7" Jul 6 23:26:04.169443 kubelet[3301]: E0706 23:26:04.168792 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-mb4p7_calico-system(f38d912d-4d66-4f1b-9595-521101a042ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-768f4c5c69-mb4p7_calico-system(f38d912d-4d66-4f1b-9595-521101a042ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c595b996035489b33de3499ca918df9c80d762853705a44e29a538d03a72f681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-mb4p7" podUID="f38d912d-4d66-4f1b-9595-521101a042ac" Jul 6 23:26:04.172756 containerd[1994]: time="2025-07-06T23:26:04.172682149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s2z7k,Uid:5a713856-581b-4c37-b03c-ba0db6757d38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.173657 kubelet[3301]: E0706 23:26:04.173421 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.173817 kubelet[3301]: E0706 23:26:04.173689 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-s2z7k" Jul 6 23:26:04.173817 kubelet[3301]: E0706 23:26:04.173734 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s2z7k" Jul 6 23:26:04.173944 kubelet[3301]: E0706 23:26:04.173816 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s2z7k_kube-system(5a713856-581b-4c37-b03c-ba0db6757d38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s2z7k_kube-system(5a713856-581b-4c37-b03c-ba0db6757d38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e96b5a9d2bb322013454e04b46d9785a1459c505950dabacc00e385f03eb6a90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s2z7k" podUID="5a713856-581b-4c37-b03c-ba0db6757d38" Jul 6 23:26:04.192560 containerd[1994]: time="2025-07-06T23:26:04.192491641Z" level=error msg="Failed to destroy network for sandbox \"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.195229 containerd[1994]: time="2025-07-06T23:26:04.195162757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-th6v2,Uid:5eec331f-8576-4c50-84c1-9316f986b7f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.195550 kubelet[3301]: E0706 23:26:04.195492 3301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:26:04.195794 kubelet[3301]: E0706 23:26:04.195709 3301 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" Jul 6 23:26:04.195794 kubelet[3301]: E0706 23:26:04.195766 3301 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" Jul 6 23:26:04.195965 kubelet[3301]: E0706 23:26:04.195864 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-749ffbdbfc-th6v2_calico-apiserver(5eec331f-8576-4c50-84c1-9316f986b7f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749ffbdbfc-th6v2_calico-apiserver(5eec331f-8576-4c50-84c1-9316f986b7f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6517f218ea58c27c3c2d3009ac95c8f4b06e75e8948b43f713effbd62c9fd20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" podUID="5eec331f-8576-4c50-84c1-9316f986b7f0" Jul 6 23:26:04.897177 systemd[1]: run-netns-cni\x2d97e77080\x2d804c\x2d5c53\x2da663\x2dae693eeea563.mount: Deactivated successfully. Jul 6 23:26:04.897355 systemd[1]: run-netns-cni\x2d421fe75e\x2d1588\x2dc772\x2d0fde\x2d3e3aae288a0d.mount: Deactivated successfully. Jul 6 23:26:04.897479 systemd[1]: run-netns-cni\x2de0d6bd68\x2de3c2\x2d0597\x2db3f1\x2d1949dbaa2a32.mount: Deactivated successfully. Jul 6 23:26:10.588607 kubelet[3301]: I0706 23:26:10.587651 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:26:12.220540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260954629.mount: Deactivated successfully. 
Jul 6 23:26:12.280421 containerd[1994]: time="2025-07-06T23:26:12.280329825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:12.282187 containerd[1994]: time="2025-07-06T23:26:12.282125997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:26:12.284783 containerd[1994]: time="2025-07-06T23:26:12.284697033Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:12.290151 containerd[1994]: time="2025-07-06T23:26:12.290084889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:12.292108 containerd[1994]: time="2025-07-06T23:26:12.291945393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 8.44508817s" Jul 6 23:26:12.292108 containerd[1994]: time="2025-07-06T23:26:12.292004925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:26:12.340493 containerd[1994]: time="2025-07-06T23:26:12.340407946Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:26:12.362113 containerd[1994]: time="2025-07-06T23:26:12.362034898Z" level=info msg="Container 
fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:12.384351 containerd[1994]: time="2025-07-06T23:26:12.384224902Z" level=info msg="CreateContainer within sandbox \"07c377e5e3f86cecbdf6f42520e2f418971b1e65a32e032a851b52e962035454\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\"" Jul 6 23:26:12.386003 containerd[1994]: time="2025-07-06T23:26:12.385936834Z" level=info msg="StartContainer for \"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\"" Jul 6 23:26:12.389550 containerd[1994]: time="2025-07-06T23:26:12.389410846Z" level=info msg="connecting to shim fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3" address="unix:///run/containerd/s/16a63e2f190033d25e8e264990208aa78c2bf142cc4dba46f1deabfcf2b18230" protocol=ttrpc version=3 Jul 6 23:26:12.472231 systemd[1]: Started cri-containerd-fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3.scope - libcontainer container fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3. Jul 6 23:26:12.560847 containerd[1994]: time="2025-07-06T23:26:12.560699999Z" level=info msg="StartContainer for \"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" returns successfully" Jul 6 23:26:12.817653 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:26:12.817833 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 6 23:26:12.905471 kubelet[3301]: I0706 23:26:12.905374 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jhkjn" podStartSLOduration=2.473686876 podStartE2EDuration="21.905343288s" podCreationTimestamp="2025-07-06 23:25:51 +0000 UTC" firstStartedPulling="2025-07-06 23:25:52.861827237 +0000 UTC m=+30.639531669" lastFinishedPulling="2025-07-06 23:26:12.293483637 +0000 UTC m=+50.071188081" observedRunningTime="2025-07-06 23:26:12.90283884 +0000 UTC m=+50.680543308" watchObservedRunningTime="2025-07-06 23:26:12.905343288 +0000 UTC m=+50.683047732" Jul 6 23:26:13.194686 kubelet[3301]: I0706 23:26:13.194181 3301 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-ca-bundle\") pod \"20bf9a29-649a-451b-86e1-85b50a04f25b\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " Jul 6 23:26:13.194686 kubelet[3301]: I0706 23:26:13.194248 3301 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-backend-key-pair\") pod \"20bf9a29-649a-451b-86e1-85b50a04f25b\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " Jul 6 23:26:13.194686 kubelet[3301]: I0706 23:26:13.194301 3301 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gjxw\" (UniqueName: \"kubernetes.io/projected/20bf9a29-649a-451b-86e1-85b50a04f25b-kube-api-access-5gjxw\") pod \"20bf9a29-649a-451b-86e1-85b50a04f25b\" (UID: \"20bf9a29-649a-451b-86e1-85b50a04f25b\") " Jul 6 23:26:13.196254 kubelet[3301]: I0706 23:26:13.196194 3301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "20bf9a29-649a-451b-86e1-85b50a04f25b" 
(UID: "20bf9a29-649a-451b-86e1-85b50a04f25b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:26:13.205379 kubelet[3301]: I0706 23:26:13.204705 3301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "20bf9a29-649a-451b-86e1-85b50a04f25b" (UID: "20bf9a29-649a-451b-86e1-85b50a04f25b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:26:13.208675 kubelet[3301]: I0706 23:26:13.208509 3301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bf9a29-649a-451b-86e1-85b50a04f25b-kube-api-access-5gjxw" (OuterVolumeSpecName: "kube-api-access-5gjxw") pod "20bf9a29-649a-451b-86e1-85b50a04f25b" (UID: "20bf9a29-649a-451b-86e1-85b50a04f25b"). InnerVolumeSpecName "kube-api-access-5gjxw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:26:13.219138 systemd[1]: var-lib-kubelet-pods-20bf9a29\x2d649a\x2d451b\x2d86e1\x2d85b50a04f25b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:26:13.219348 systemd[1]: var-lib-kubelet-pods-20bf9a29\x2d649a\x2d451b\x2d86e1\x2d85b50a04f25b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5gjxw.mount: Deactivated successfully. 
Jul 6 23:26:13.295541 kubelet[3301]: I0706 23:26:13.294807 3301 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-ca-bundle\") on node \"ip-172-31-21-233\" DevicePath \"\"" Jul 6 23:26:13.295541 kubelet[3301]: I0706 23:26:13.294867 3301 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bf9a29-649a-451b-86e1-85b50a04f25b-whisker-backend-key-pair\") on node \"ip-172-31-21-233\" DevicePath \"\"" Jul 6 23:26:13.295541 kubelet[3301]: I0706 23:26:13.294892 3301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gjxw\" (UniqueName: \"kubernetes.io/projected/20bf9a29-649a-451b-86e1-85b50a04f25b-kube-api-access-5gjxw\") on node \"ip-172-31-21-233\" DevicePath \"\"" Jul 6 23:26:13.872420 kubelet[3301]: I0706 23:26:13.872346 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:26:13.883352 systemd[1]: Removed slice kubepods-besteffort-pod20bf9a29_649a_451b_86e1_85b50a04f25b.slice - libcontainer container kubepods-besteffort-pod20bf9a29_649a_451b_86e1_85b50a04f25b.slice. Jul 6 23:26:14.015903 systemd[1]: Created slice kubepods-besteffort-pod47c1f265_024f_4e76_b801_5d3d817aeae0.slice - libcontainer container kubepods-besteffort-pod47c1f265_024f_4e76_b801_5d3d817aeae0.slice. 
Jul 6 23:26:14.103026 kubelet[3301]: I0706 23:26:14.102961 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47c1f265-024f-4e76-b801-5d3d817aeae0-whisker-backend-key-pair\") pod \"whisker-7b69cbc649-7qrk5\" (UID: \"47c1f265-024f-4e76-b801-5d3d817aeae0\") " pod="calico-system/whisker-7b69cbc649-7qrk5" Jul 6 23:26:14.103621 kubelet[3301]: I0706 23:26:14.103086 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c1f265-024f-4e76-b801-5d3d817aeae0-whisker-ca-bundle\") pod \"whisker-7b69cbc649-7qrk5\" (UID: \"47c1f265-024f-4e76-b801-5d3d817aeae0\") " pod="calico-system/whisker-7b69cbc649-7qrk5" Jul 6 23:26:14.103621 kubelet[3301]: I0706 23:26:14.103179 3301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrwl\" (UniqueName: \"kubernetes.io/projected/47c1f265-024f-4e76-b801-5d3d817aeae0-kube-api-access-mnrwl\") pod \"whisker-7b69cbc649-7qrk5\" (UID: \"47c1f265-024f-4e76-b801-5d3d817aeae0\") " pod="calico-system/whisker-7b69cbc649-7qrk5" Jul 6 23:26:14.324052 containerd[1994]: time="2025-07-06T23:26:14.323936903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b69cbc649-7qrk5,Uid:47c1f265-024f-4e76-b801-5d3d817aeae0,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:14.516819 kubelet[3301]: I0706 23:26:14.516764 3301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bf9a29-649a-451b-86e1-85b50a04f25b" path="/var/lib/kubelet/pods/20bf9a29-649a-451b-86e1-85b50a04f25b/volumes" Jul 6 23:26:14.688288 (udev-worker)[4527]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:26:14.690144 systemd-networkd[1877]: cali3629fe64442: Link UP Jul 6 23:26:14.692164 systemd-networkd[1877]: cali3629fe64442: Gained carrier Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.380 [INFO][4555] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.471 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0 whisker-7b69cbc649- calico-system 47c1f265-024f-4e76-b801-5d3d817aeae0 920 0 2025-07-06 23:26:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b69cbc649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-233 whisker-7b69cbc649-7qrk5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3629fe64442 [] [] }} ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.471 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.562 [INFO][4566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" HandleID="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Workload="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.562 
[INFO][4566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" HandleID="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Workload="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac390), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-233", "pod":"whisker-7b69cbc649-7qrk5", "timestamp":"2025-07-06 23:26:14.561976237 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.562 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.562 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.562 [INFO][4566] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.605 [INFO][4566] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.617 [INFO][4566] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.628 [INFO][4566] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.632 [INFO][4566] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.636 [INFO][4566] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.636 [INFO][4566] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.639 [INFO][4566] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0 Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.647 [INFO][4566] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.657 [INFO][4566] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.1/26] block=192.168.107.0/26 
handle="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.658 [INFO][4566] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.1/26] handle="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" host="ip-172-31-21-233" Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.658 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:26:14.737528 containerd[1994]: 2025-07-06 23:26:14.658 [INFO][4566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.1/26] IPv6=[] ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" HandleID="k8s-pod-network.7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Workload="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.669 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0", GenerateName:"whisker-7b69cbc649-", Namespace:"calico-system", SelfLink:"", UID:"47c1f265-024f-4e76-b801-5d3d817aeae0", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b69cbc649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"whisker-7b69cbc649-7qrk5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3629fe64442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.669 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.1/32] ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.670 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3629fe64442 ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.696 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.698 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" 
Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0", GenerateName:"whisker-7b69cbc649-", Namespace:"calico-system", SelfLink:"", UID:"47c1f265-024f-4e76-b801-5d3d817aeae0", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b69cbc649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0", Pod:"whisker-7b69cbc649-7qrk5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3629fe64442", MAC:"72:0a:d2:09:96:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:14.739944 containerd[1994]: 2025-07-06 23:26:14.727 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" Namespace="calico-system" Pod="whisker-7b69cbc649-7qrk5" WorkloadEndpoint="ip--172--31--21--233-k8s-whisker--7b69cbc649--7qrk5-eth0" Jul 6 23:26:14.860433 containerd[1994]: 
time="2025-07-06T23:26:14.860188742Z" level=info msg="connecting to shim 7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0" address="unix:///run/containerd/s/9e7053c88eaf3e5fafeb2e5dc13cd0ec9c1228a1a2110babc541c1d27df474a4" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:14.973162 systemd[1]: Started cri-containerd-7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0.scope - libcontainer container 7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0. Jul 6 23:26:15.273115 containerd[1994]: time="2025-07-06T23:26:15.272619660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b69cbc649-7qrk5,Uid:47c1f265-024f-4e76-b801-5d3d817aeae0,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0\"" Jul 6 23:26:15.278608 containerd[1994]: time="2025-07-06T23:26:15.278522424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:26:15.511955 containerd[1994]: time="2025-07-06T23:26:15.511537861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-th6v2,Uid:5eec331f-8576-4c50-84c1-9316f986b7f0,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:26:15.515172 containerd[1994]: time="2025-07-06T23:26:15.515104909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-fxkxv,Uid:143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:26:15.526758 containerd[1994]: time="2025-07-06T23:26:15.524176429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9d598d6-9mrmm,Uid:28e5776a-359b-488d-822c-afbf214fd771,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:16.161686 systemd-networkd[1877]: cali53fba45dba9: Link UP Jul 6 23:26:16.162151 systemd-networkd[1877]: cali53fba45dba9: Gained carrier Jul 6 23:26:16.169435 (udev-worker)[4525]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.745 [INFO][4725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.815 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0 calico-apiserver-749ffbdbfc- calico-apiserver 143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8 840 0 2025-07-06 23:25:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749ffbdbfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-233 calico-apiserver-749ffbdbfc-fxkxv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53fba45dba9 [] [] }} ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.815 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.974 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" HandleID="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.975 
[INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" HandleID="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-233", "pod":"calico-apiserver-749ffbdbfc-fxkxv", "timestamp":"2025-07-06 23:26:15.973973188 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.975 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.975 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:15.976 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.039 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.059 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.083 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.095 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.113 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.114 [INFO][4759] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.118 [INFO][4759] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35 Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.133 [INFO][4759] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.143 [INFO][4759] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.2/26] block=192.168.107.0/26 
handle="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.143 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.2/26] handle="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" host="ip-172-31-21-233" Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.143 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:26:16.261611 containerd[1994]: 2025-07-06 23:26:16.143 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.2/26] IPv6=[] ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" HandleID="k8s-pod-network.501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.151 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0", GenerateName:"calico-apiserver-749ffbdbfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749ffbdbfc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"calico-apiserver-749ffbdbfc-fxkxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53fba45dba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.151 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.2/32] ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.151 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53fba45dba9 ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.166 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.168 
[INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0", GenerateName:"calico-apiserver-749ffbdbfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749ffbdbfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35", Pod:"calico-apiserver-749ffbdbfc-fxkxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53fba45dba9", MAC:"52:9e:fc:0e:2c:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.264202 containerd[1994]: 2025-07-06 23:26:16.256 [INFO][4725] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-fxkxv" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--fxkxv-eth0" Jul 6 23:26:16.332544 containerd[1994]: time="2025-07-06T23:26:16.332484865Z" level=info msg="connecting to shim 501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35" address="unix:///run/containerd/s/e4a7f4b9913b01be17886acc08c1135aa8028d1804776bffb76f41ac43063706" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:16.421355 systemd[1]: Started cri-containerd-501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35.scope - libcontainer container 501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35. Jul 6 23:26:16.481766 systemd-networkd[1877]: cali3629fe64442: Gained IPv6LL Jul 6 23:26:16.515111 containerd[1994]: time="2025-07-06T23:26:16.514714934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mb4p7,Uid:f38d912d-4d66-4f1b-9595-521101a042ac,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:16.517949 containerd[1994]: time="2025-07-06T23:26:16.517876838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x28nd,Uid:cf658958-f190-4039-905e-8d2608b3af70,Namespace:kube-system,Attempt:0,}" Jul 6 23:26:16.582957 systemd-networkd[1877]: cali686417e4867: Link UP Jul 6 23:26:16.586195 systemd-networkd[1877]: cali686417e4867: Gained carrier Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:15.708 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:15.790 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0 calico-apiserver-749ffbdbfc- calico-apiserver 
5eec331f-8576-4c50-84c1-9316f986b7f0 845 0 2025-07-06 23:25:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749ffbdbfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-233 calico-apiserver-749ffbdbfc-th6v2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali686417e4867 [] [] }} ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:15.790 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.016 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" HandleID="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.016 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" HandleID="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356190), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-233", "pod":"calico-apiserver-749ffbdbfc-th6v2", "timestamp":"2025-07-06 23:26:16.016134684 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.016 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.143 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.144 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.253 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.344 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.457 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.477 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.486 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.486 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 
handle="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.500 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3 Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.531 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.562 [INFO][4751] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.3/26] block=192.168.107.0/26 handle="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.562 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.3/26] handle="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" host="ip-172-31-21-233" Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.562 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:26:16.660682 containerd[1994]: 2025-07-06 23:26:16.562 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.3/26] IPv6=[] ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" HandleID="k8s-pod-network.04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Workload="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.568 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0", GenerateName:"calico-apiserver-749ffbdbfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5eec331f-8576-4c50-84c1-9316f986b7f0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749ffbdbfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"calico-apiserver-749ffbdbfc-th6v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali686417e4867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.568 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.3/32] ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.568 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali686417e4867 ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.588 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.591 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0", GenerateName:"calico-apiserver-749ffbdbfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5eec331f-8576-4c50-84c1-9316f986b7f0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749ffbdbfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3", Pod:"calico-apiserver-749ffbdbfc-th6v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali686417e4867", MAC:"46:e6:8f:97:f3:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.664292 containerd[1994]: 2025-07-06 23:26:16.654 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" Namespace="calico-apiserver" Pod="calico-apiserver-749ffbdbfc-th6v2" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--apiserver--749ffbdbfc--th6v2-eth0" Jul 6 23:26:16.809059 containerd[1994]: time="2025-07-06T23:26:16.808892632Z" level=info msg="connecting to shim 
04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3" address="unix:///run/containerd/s/21f23d1e547a2403e85efae595f0362dd6b1f195cbaa7350cbe871a31422d9d1" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:16.873272 systemd-networkd[1877]: calife9cb6711f9: Link UP Jul 6 23:26:16.878527 systemd-networkd[1877]: calife9cb6711f9: Gained carrier Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:15.781 [INFO][4734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:15.855 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0 calico-kube-controllers-7bd9d598d6- calico-system 28e5776a-359b-488d-822c-afbf214fd771 843 0 2025-07-06 23:25:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bd9d598d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-233 calico-kube-controllers-7bd9d598d6-9mrmm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calife9cb6711f9 [] [] }} ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:15.858 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.940196 containerd[1994]: 
2025-07-06 23:26:16.049 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" HandleID="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Workload="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.049 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" HandleID="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Workload="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-233", "pod":"calico-kube-controllers-7bd9d598d6-9mrmm", "timestamp":"2025-07-06 23:26:16.048712056 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.049 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.562 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.563 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.640 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.659 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.683 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.694 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.710 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.711 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.715 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9 Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.726 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.776 [INFO][4764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.4/26] block=192.168.107.0/26 
handle="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.777 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.4/26] handle="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" host="ip-172-31-21-233" Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.779 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:26:16.940196 containerd[1994]: 2025-07-06 23:26:16.779 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.4/26] IPv6=[] ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" HandleID="k8s-pod-network.5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Workload="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.814 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0", GenerateName:"calico-kube-controllers-7bd9d598d6-", Namespace:"calico-system", SelfLink:"", UID:"28e5776a-359b-488d-822c-afbf214fd771", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd9d598d6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"calico-kube-controllers-7bd9d598d6-9mrmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife9cb6711f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.815 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.4/32] ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.815 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife9cb6711f9 ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.883 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" 
WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.884 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0", GenerateName:"calico-kube-controllers-7bd9d598d6-", Namespace:"calico-system", SelfLink:"", UID:"28e5776a-359b-488d-822c-afbf214fd771", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd9d598d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9", Pod:"calico-kube-controllers-7bd9d598d6-9mrmm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife9cb6711f9", MAC:"b2:84:84:fa:f4:84", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:16.941421 containerd[1994]: 2025-07-06 23:26:16.907 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" Namespace="calico-system" Pod="calico-kube-controllers-7bd9d598d6-9mrmm" WorkloadEndpoint="ip--172--31--21--233-k8s-calico--kube--controllers--7bd9d598d6--9mrmm-eth0" Jul 6 23:26:16.941919 systemd[1]: Started cri-containerd-04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3.scope - libcontainer container 04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3. Jul 6 23:26:17.160031 systemd[1]: Started sshd@7-172.31.21.233:22-147.75.109.163:59680.service - OpenSSH per-connection server daemon (147.75.109.163:59680). Jul 6 23:26:17.183790 containerd[1994]: time="2025-07-06T23:26:17.182978834Z" level=info msg="connecting to shim 5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9" address="unix:///run/containerd/s/7dd7998817cda6fed995efe043922d0a2b950dc6267f319148e5ead0da0ad941" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:17.442695 systemd-networkd[1877]: cali53fba45dba9: Gained IPv6LL Jul 6 23:26:17.500536 sshd[4925]: Accepted publickey for core from 147.75.109.163 port 59680 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:17.504541 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:17.523691 containerd[1994]: time="2025-07-06T23:26:17.521962023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mhl8x,Uid:4b45ea5b-9b83-404d-8a3d-f277c8e8af9a,Namespace:calico-system,Attempt:0,}" Jul 6 23:26:17.530687 systemd-logind[1972]: New session 8 of user core. Jul 6 23:26:17.558004 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:26:17.617376 containerd[1994]: time="2025-07-06T23:26:17.617135272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-fxkxv,Uid:143d9a0e-7ac2-4d7b-89b4-af04bcbfb2e8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35\"" Jul 6 23:26:17.717497 systemd[1]: Started cri-containerd-5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9.scope - libcontainer container 5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9. Jul 6 23:26:17.905649 containerd[1994]: time="2025-07-06T23:26:17.904371353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:17.919778 containerd[1994]: time="2025-07-06T23:26:17.919652369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749ffbdbfc-th6v2,Uid:5eec331f-8576-4c50-84c1-9316f986b7f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3\"" Jul 6 23:26:17.934262 containerd[1994]: time="2025-07-06T23:26:17.933503033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:26:17.952781 containerd[1994]: time="2025-07-06T23:26:17.950826557Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:18.025732 containerd[1994]: time="2025-07-06T23:26:18.024992882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:18.032096 containerd[1994]: time="2025-07-06T23:26:18.027278690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" 
with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.748521858s" Jul 6 23:26:18.033667 containerd[1994]: time="2025-07-06T23:26:18.032389142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:26:18.047348 containerd[1994]: time="2025-07-06T23:26:18.047106446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:26:18.074808 containerd[1994]: time="2025-07-06T23:26:18.074384366Z" level=info msg="CreateContainer within sandbox \"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:26:18.146356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206917787.mount: Deactivated successfully. Jul 6 23:26:18.149338 sshd[4965]: Connection closed by 147.75.109.163 port 59680 Jul 6 23:26:18.154851 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:18.177376 systemd[1]: sshd@7-172.31.21.233:22-147.75.109.163:59680.service: Deactivated successfully. Jul 6 23:26:18.187999 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:26:18.193245 systemd-logind[1972]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:26:18.198138 systemd-logind[1972]: Removed session 8. 
Jul 6 23:26:18.245721 containerd[1994]: time="2025-07-06T23:26:18.245660103Z" level=info msg="Container b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:18.289414 containerd[1994]: time="2025-07-06T23:26:18.288446391Z" level=info msg="CreateContainer within sandbox \"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af\"" Jul 6 23:26:18.293616 containerd[1994]: time="2025-07-06T23:26:18.292838919Z" level=info msg="StartContainer for \"b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af\"" Jul 6 23:26:18.298238 containerd[1994]: time="2025-07-06T23:26:18.298128531Z" level=info msg="connecting to shim b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af" address="unix:///run/containerd/s/9e7053c88eaf3e5fafeb2e5dc13cd0ec9c1228a1a2110babc541c1d27df474a4" protocol=ttrpc version=3 Jul 6 23:26:18.375799 systemd-networkd[1877]: calif641e2fea16: Link UP Jul 6 23:26:18.386434 systemd-networkd[1877]: calif641e2fea16: Gained carrier Jul 6 23:26:18.404350 systemd[1]: Started cri-containerd-b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af.scope - libcontainer container b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af. 
Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:17.240 [INFO][4889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:17.400 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0 goldmane-768f4c5c69- calico-system f38d912d-4d66-4f1b-9595-521101a042ac 842 0 2025-07-06 23:25:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-233 goldmane-768f4c5c69-mb4p7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif641e2fea16 [] [] }} ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:17.400 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.062 [INFO][4959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" HandleID="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Workload="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.067 [INFO][4959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" 
HandleID="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Workload="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032c740), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-233", "pod":"goldmane-768f4c5c69-mb4p7", "timestamp":"2025-07-06 23:26:18.053480726 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.067 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.067 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.068 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.134 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.217 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.240 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.259 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.268 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 
23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.268 [INFO][4959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.271 [INFO][4959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76 Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.285 [INFO][4959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.318 [INFO][4959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.5/26] block=192.168.107.0/26 handle="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.320 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.5/26] handle="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" host="ip-172-31-21-233" Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.321 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:26:18.464358 containerd[1994]: 2025-07-06 23:26:18.321 [INFO][4959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.5/26] IPv6=[] ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" HandleID="k8s-pod-network.69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Workload="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.345 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f38d912d-4d66-4f1b-9595-521101a042ac", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"goldmane-768f4c5c69-mb4p7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif641e2fea16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.345 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.5/32] ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.345 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif641e2fea16 ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.397 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.413 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f38d912d-4d66-4f1b-9595-521101a042ac", ResourceVersion:"842", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76", Pod:"goldmane-768f4c5c69-mb4p7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif641e2fea16", MAC:"da:e0:21:e2:f2:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:18.471069 containerd[1994]: 2025-07-06 23:26:18.456 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" Namespace="calico-system" Pod="goldmane-768f4c5c69-mb4p7" WorkloadEndpoint="ip--172--31--21--233-k8s-goldmane--768f4c5c69--mb4p7-eth0" Jul 6 23:26:18.465791 systemd-networkd[1877]: calife9cb6711f9: Gained IPv6LL Jul 6 23:26:18.521298 containerd[1994]: time="2025-07-06T23:26:18.521235160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s2z7k,Uid:5a713856-581b-4c37-b03c-ba0db6757d38,Namespace:kube-system,Attempt:0,}" Jul 6 23:26:18.655760 systemd-networkd[1877]: caliae5cd91c8e0: Link UP Jul 6 23:26:18.658177 systemd-networkd[1877]: cali686417e4867: Gained IPv6LL Jul 6 23:26:18.667350 systemd-networkd[1877]: 
caliae5cd91c8e0: Gained carrier Jul 6 23:26:18.706418 containerd[1994]: time="2025-07-06T23:26:18.706166237Z" level=info msg="connecting to shim 69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76" address="unix:///run/containerd/s/b71aab05ab523064d750b012b49bcdaeec318c5685b4fea30aacdfffb13c3bdc" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:17.315 [INFO][4886] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:17.472 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0 coredns-674b8bbfcf- kube-system cf658958-f190-4039-905e-8d2608b3af70 839 0 2025-07-06 23:25:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-233 coredns-674b8bbfcf-x28nd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae5cd91c8e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:17.474 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.085 [INFO][4969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" 
HandleID="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.086 [INFO][4969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" HandleID="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9cd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-233", "pod":"coredns-674b8bbfcf-x28nd", "timestamp":"2025-07-06 23:26:18.085296518 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.087 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.321 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.321 [INFO][4969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.370 [INFO][4969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.426 [INFO][4969] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.441 [INFO][4969] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.455 [INFO][4969] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.476 [INFO][4969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.481 [INFO][4969] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.517 [INFO][4969] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276 Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.550 [INFO][4969] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.597 [INFO][4969] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.6/26] block=192.168.107.0/26 
handle="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.600 [INFO][4969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.6/26] handle="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" host="ip-172-31-21-233" Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.601 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:26:18.782403 containerd[1994]: 2025-07-06 23:26:18.601 [INFO][4969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.6/26] IPv6=[] ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" HandleID="k8s-pod-network.c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.785861 containerd[1994]: 2025-07-06 23:26:18.625 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cf658958-f190-4039-905e-8d2608b3af70", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"coredns-674b8bbfcf-x28nd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae5cd91c8e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:18.785861 containerd[1994]: 2025-07-06 23:26:18.627 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.6/32] ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.785861 containerd[1994]: 2025-07-06 23:26:18.629 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae5cd91c8e0 ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.785861 containerd[1994]: 2025-07-06 23:26:18.677 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.785861 containerd[1994]: 2025-07-06 23:26:18.684 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cf658958-f190-4039-905e-8d2608b3af70", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276", Pod:"coredns-674b8bbfcf-x28nd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae5cd91c8e0", MAC:"de:1f:55:c2:32:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:18.786326 containerd[1994]: 2025-07-06 23:26:18.754 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" Namespace="kube-system" Pod="coredns-674b8bbfcf-x28nd" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--x28nd-eth0" Jul 6 23:26:18.876695 systemd[1]: Started cri-containerd-69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76.scope - libcontainer container 69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76. Jul 6 23:26:18.946932 systemd-networkd[1877]: calia7f0831b898: Link UP Jul 6 23:26:18.951863 systemd-networkd[1877]: calia7f0831b898: Gained carrier Jul 6 23:26:18.984382 containerd[1994]: time="2025-07-06T23:26:18.984159967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9d598d6-9mrmm,Uid:28e5776a-359b-488d-822c-afbf214fd771,Namespace:calico-system,Attempt:0,} returns sandbox id \"5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9\"" Jul 6 23:26:19.032558 containerd[1994]: time="2025-07-06T23:26:19.032372391Z" level=info msg="connecting to shim c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276" address="unix:///run/containerd/s/836ad141e4409164f857d2a4aa1a8f40f1e793e53d1e9f387070198592c12d85" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:17.847 [INFO][4968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0 csi-node-driver- 
calico-system 4b45ea5b-9b83-404d-8a3d-f277c8e8af9a 690 0 2025-07-06 23:25:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-233 csi-node-driver-mhl8x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia7f0831b898 [] [] }} ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:17.847 [INFO][4968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.189 [INFO][5017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" HandleID="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Workload="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.190 [INFO][5017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" HandleID="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Workload="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001227a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-233", 
"pod":"csi-node-driver-mhl8x", "timestamp":"2025-07-06 23:26:18.186945459 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.192 [INFO][5017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.601 [INFO][5017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.603 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.670 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.711 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.736 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.759 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.770 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.770 [INFO][5017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 
23:26:18.779 [INFO][5017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084 Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.805 [INFO][5017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.846 [INFO][5017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.7/26] block=192.168.107.0/26 handle="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.846 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.7/26] handle="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" host="ip-172-31-21-233" Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.852 [INFO][5017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:26:19.045879 containerd[1994]: 2025-07-06 23:26:18.853 [INFO][5017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.7/26] IPv6=[] ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" HandleID="k8s-pod-network.029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Workload="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:18.898 [INFO][4968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"csi-node-driver-mhl8x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7f0831b898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:18.899 [INFO][4968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.7/32] ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:18.902 [INFO][4968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7f0831b898 ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:18.955 [INFO][4968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:18.977 [INFO][4968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b45ea5b-9b83-404d-8a3d-f277c8e8af9a", 
ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084", Pod:"csi-node-driver-mhl8x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7f0831b898", MAC:"c2:49:24:a1:fe:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:19.047149 containerd[1994]: 2025-07-06 23:26:19.028 [INFO][4968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" Namespace="calico-system" Pod="csi-node-driver-mhl8x" WorkloadEndpoint="ip--172--31--21--233-k8s-csi--node--driver--mhl8x-eth0" Jul 6 23:26:19.200051 systemd[1]: Started cri-containerd-c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276.scope - libcontainer container c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276. 
Jul 6 23:26:19.235490 containerd[1994]: time="2025-07-06T23:26:19.235377976Z" level=info msg="connecting to shim 029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084" address="unix:///run/containerd/s/251de9ee92aeda3362ffef59fb12e012a9397d656ec99dea098d20beea22023a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:19.325242 containerd[1994]: time="2025-07-06T23:26:19.324788224Z" level=info msg="StartContainer for \"b5082889313e4240a23a2eccfb61369919f43752c8305b4d1f5aabd741aa57af\" returns successfully" Jul 6 23:26:19.422639 containerd[1994]: time="2025-07-06T23:26:19.422082965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x28nd,Uid:cf658958-f190-4039-905e-8d2608b3af70,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276\"" Jul 6 23:26:19.459142 systemd[1]: Started cri-containerd-029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084.scope - libcontainer container 029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084. 
Jul 6 23:26:19.472906 containerd[1994]: time="2025-07-06T23:26:19.472692737Z" level=info msg="CreateContainer within sandbox \"c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:26:19.489383 systemd-networkd[1877]: calif641e2fea16: Gained IPv6LL Jul 6 23:26:19.508091 containerd[1994]: time="2025-07-06T23:26:19.506884553Z" level=info msg="Container 9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:19.556196 containerd[1994]: time="2025-07-06T23:26:19.556011389Z" level=info msg="CreateContainer within sandbox \"c13864fb5da66957e09d5b5ea85fb953bc8c52091f2a815fe7182cfeb6c8e276\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367\"" Jul 6 23:26:19.560646 containerd[1994]: time="2025-07-06T23:26:19.560173673Z" level=info msg="StartContainer for \"9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367\"" Jul 6 23:26:19.590611 containerd[1994]: time="2025-07-06T23:26:19.590461782Z" level=info msg="connecting to shim 9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367" address="unix:///run/containerd/s/836ad141e4409164f857d2a4aa1a8f40f1e793e53d1e9f387070198592c12d85" protocol=ttrpc version=3 Jul 6 23:26:19.684035 systemd[1]: Started cri-containerd-9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367.scope - libcontainer container 9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367. 
Jul 6 23:26:19.714100 systemd-networkd[1877]: cali2d0dd486431: Link UP Jul 6 23:26:19.717941 systemd-networkd[1877]: cali2d0dd486431: Gained carrier Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.037 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0 coredns-674b8bbfcf- kube-system 5a713856-581b-4c37-b03c-ba0db6757d38 844 0 2025-07-06 23:25:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-233 coredns-674b8bbfcf-s2z7k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d0dd486431 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.040 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.400 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" HandleID="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.404 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" HandleID="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-233", "pod":"coredns-674b8bbfcf-s2z7k", "timestamp":"2025-07-06 23:26:19.400491305 +0000 UTC"}, Hostname:"ip-172-31-21-233", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.404 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.405 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.405 [INFO][5162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-233' Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.449 [INFO][5162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.481 [INFO][5162] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.502 [INFO][5162] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.534 [INFO][5162] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.556 [INFO][5162] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.107.0/26 host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.556 [INFO][5162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.568 [INFO][5162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030 Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.616 [INFO][5162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.656 [INFO][5162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.8/26] block=192.168.107.0/26 handle="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.658 [INFO][5162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.8/26] handle="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" host="ip-172-31-21-233" Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.658 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:26:19.824668 containerd[1994]: 2025-07-06 23:26:19.658 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.8/26] IPv6=[] ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" HandleID="k8s-pod-network.ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Workload="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.828247 containerd[1994]: 2025-07-06 23:26:19.681 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a713856-581b-4c37-b03c-ba0db6757d38", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"", Pod:"coredns-674b8bbfcf-s2z7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0dd486431", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:19.828247 containerd[1994]: 2025-07-06 23:26:19.682 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.8/32] ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.828247 containerd[1994]: 2025-07-06 23:26:19.682 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d0dd486431 ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.828247 containerd[1994]: 2025-07-06 23:26:19.744 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.828247 containerd[1994]: 2025-07-06 23:26:19.773 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a713856-581b-4c37-b03c-ba0db6757d38", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-233", ContainerID:"ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030", Pod:"coredns-674b8bbfcf-s2z7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0dd486431", MAC:"de:12:12:67:16:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:26:19.829337 containerd[1994]: 2025-07-06 23:26:19.809 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" Namespace="kube-system" Pod="coredns-674b8bbfcf-s2z7k" WorkloadEndpoint="ip--172--31--21--233-k8s-coredns--674b8bbfcf--s2z7k-eth0" Jul 6 23:26:19.837354 containerd[1994]: time="2025-07-06T23:26:19.837281227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mb4p7,Uid:f38d912d-4d66-4f1b-9595-521101a042ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76\"" Jul 6 23:26:19.925639 containerd[1994]: time="2025-07-06T23:26:19.925141135Z" level=info msg="StartContainer for \"9a6ccd87506d4f92bd143ec74e7a6c067d37cee96debeb68d275aad5ecdf0367\" returns successfully" Jul 6 23:26:19.966280 containerd[1994]: time="2025-07-06T23:26:19.966143899Z" level=info msg="connecting to shim ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030" address="unix:///run/containerd/s/5e02f0a27dea274ca1ce32ae394031715a910f0bc62a91317c5e8ad46bc145b2" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:26:19.984994 containerd[1994]: time="2025-07-06T23:26:19.984695155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mhl8x,Uid:4b45ea5b-9b83-404d-8a3d-f277c8e8af9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084\"" Jul 6 23:26:20.129277 kubelet[3301]: I0706 23:26:20.127742 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x28nd" podStartSLOduration=51.127713532 podStartE2EDuration="51.127713532s" podCreationTimestamp="2025-07-06 23:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:26:20.126758824 +0000 UTC m=+57.904463328" watchObservedRunningTime="2025-07-06 23:26:20.127713532 +0000 UTC m=+57.905417988" Jul 6 23:26:20.155471 systemd[1]: Started 
cri-containerd-ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030.scope - libcontainer container ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030. Jul 6 23:26:20.192978 systemd-networkd[1877]: caliae5cd91c8e0: Gained IPv6LL Jul 6 23:26:20.484731 systemd-networkd[1877]: vxlan.calico: Link UP Jul 6 23:26:20.484757 systemd-networkd[1877]: vxlan.calico: Gained carrier Jul 6 23:26:20.526165 containerd[1994]: time="2025-07-06T23:26:20.526039554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s2z7k,Uid:5a713856-581b-4c37-b03c-ba0db6757d38,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030\"" Jul 6 23:26:20.567711 containerd[1994]: time="2025-07-06T23:26:20.567519090Z" level=info msg="CreateContainer within sandbox \"ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:26:20.637664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901024213.mount: Deactivated successfully. Jul 6 23:26:20.651214 containerd[1994]: time="2025-07-06T23:26:20.650699707Z" level=info msg="Container 658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:20.652961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692004082.mount: Deactivated successfully. 
Jul 6 23:26:20.696428 containerd[1994]: time="2025-07-06T23:26:20.696120823Z" level=info msg="CreateContainer within sandbox \"ac0fb6fa15d551fc9f2daf6c3bccc02b0bf4577efff647fa7279f7fd8ad4c030\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0\"" Jul 6 23:26:20.698798 containerd[1994]: time="2025-07-06T23:26:20.698703199Z" level=info msg="StartContainer for \"658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0\"" Jul 6 23:26:20.702253 containerd[1994]: time="2025-07-06T23:26:20.702157255Z" level=info msg="connecting to shim 658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0" address="unix:///run/containerd/s/5e02f0a27dea274ca1ce32ae394031715a910f0bc62a91317c5e8ad46bc145b2" protocol=ttrpc version=3 Jul 6 23:26:20.711367 kubelet[3301]: I0706 23:26:20.710457 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:26:20.850152 systemd[1]: Started cri-containerd-658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0.scope - libcontainer container 658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0. 
Jul 6 23:26:20.896963 systemd-networkd[1877]: calia7f0831b898: Gained IPv6LL Jul 6 23:26:21.214780 containerd[1994]: time="2025-07-06T23:26:21.214465830Z" level=info msg="StartContainer for \"658117a04b92e4d6739254495f911a3bb43bf44495a286bb4d996fd244fb19b0\" returns successfully" Jul 6 23:26:21.344806 systemd-networkd[1877]: cali2d0dd486431: Gained IPv6LL Jul 6 23:26:21.794383 systemd-networkd[1877]: vxlan.calico: Gained IPv6LL Jul 6 23:26:21.939682 containerd[1994]: time="2025-07-06T23:26:21.939626697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" id:\"9d8393d1069842570cfe1b3fe6f47816e520736d9f6436b955e1d21919350792\" pid:5408 exit_status:1 exited_at:{seconds:1751844381 nanos:937206477}" Jul 6 23:26:22.266922 kubelet[3301]: I0706 23:26:22.266028 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s2z7k" podStartSLOduration=53.266002771 podStartE2EDuration="53.266002771s" podCreationTimestamp="2025-07-06 23:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:26:22.184115994 +0000 UTC m=+59.961820522" watchObservedRunningTime="2025-07-06 23:26:22.266002771 +0000 UTC m=+60.043707215" Jul 6 23:26:23.203947 systemd[1]: Started sshd@8-172.31.21.233:22-147.75.109.163:59686.service - OpenSSH per-connection server daemon (147.75.109.163:59686). 
Jul 6 23:26:23.293170 containerd[1994]: time="2025-07-06T23:26:23.292941272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" id:\"8618af9441cb49e5f2abd846ea9bc20e33bd9d5ffbbf2b2b576294d861364535\" pid:5455 exit_status:1 exited_at:{seconds:1751844383 nanos:289141880}" Jul 6 23:26:23.509108 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 59686 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:23.520316 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:23.545355 systemd-logind[1972]: New session 9 of user core. Jul 6 23:26:23.553392 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:26:23.908994 containerd[1994]: time="2025-07-06T23:26:23.908443643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:23.911518 containerd[1994]: time="2025-07-06T23:26:23.911417759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:26:23.914355 containerd[1994]: time="2025-07-06T23:26:23.913444619Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:23.922292 containerd[1994]: time="2025-07-06T23:26:23.922086839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:23.926561 containerd[1994]: time="2025-07-06T23:26:23.924741839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.875183613s" Jul 6 23:26:23.926561 containerd[1994]: time="2025-07-06T23:26:23.924835919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:26:23.928594 containerd[1994]: time="2025-07-06T23:26:23.928521035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:26:23.942880 containerd[1994]: time="2025-07-06T23:26:23.942378539Z" level=info msg="CreateContainer within sandbox \"501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:26:23.962923 containerd[1994]: time="2025-07-06T23:26:23.961603763Z" level=info msg="Container cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:23.977637 sshd[5517]: Connection closed by 147.75.109.163 port 59686 Jul 6 23:26:23.977754 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:23.991636 systemd[1]: sshd@8-172.31.21.233:22-147.75.109.163:59686.service: Deactivated successfully. Jul 6 23:26:24.000800 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:26:24.009476 systemd-logind[1972]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:26:24.016508 systemd-logind[1972]: Removed session 9. 
Jul 6 23:26:24.022007 containerd[1994]: time="2025-07-06T23:26:24.021928820Z" level=info msg="CreateContainer within sandbox \"501a78222863ca7b15b0285acf3ad4b68b65072ad1b4c02bb46de032d98c8f35\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6\"" Jul 6 23:26:24.029660 containerd[1994]: time="2025-07-06T23:26:24.029204132Z" level=info msg="StartContainer for \"cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6\"" Jul 6 23:26:24.035337 containerd[1994]: time="2025-07-06T23:26:24.033630008Z" level=info msg="connecting to shim cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6" address="unix:///run/containerd/s/e4a7f4b9913b01be17886acc08c1135aa8028d1804776bffb76f41ac43063706" protocol=ttrpc version=3 Jul 6 23:26:24.098972 systemd[1]: Started cri-containerd-cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6.scope - libcontainer container cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6. 
Jul 6 23:26:24.310428 containerd[1994]: time="2025-07-06T23:26:24.309768321Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:24.314074 containerd[1994]: time="2025-07-06T23:26:24.313984269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:26:24.316146 containerd[1994]: time="2025-07-06T23:26:24.316055673Z" level=info msg="StartContainer for \"cfec4106eb7db453d02cf1a4d18693379e14ecbe72e578fb8e041d9269e222e6\" returns successfully" Jul 6 23:26:24.328161 containerd[1994]: time="2025-07-06T23:26:24.328078821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 397.421582ms" Jul 6 23:26:24.328161 containerd[1994]: time="2025-07-06T23:26:24.328155249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:26:24.333821 containerd[1994]: time="2025-07-06T23:26:24.332898729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:26:24.344928 containerd[1994]: time="2025-07-06T23:26:24.344813553Z" level=info msg="CreateContainer within sandbox \"04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:26:24.376618 containerd[1994]: time="2025-07-06T23:26:24.375092241Z" level=info msg="Container 194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:24.396419 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount976984606.mount: Deactivated successfully. Jul 6 23:26:24.407653 containerd[1994]: time="2025-07-06T23:26:24.407427393Z" level=info msg="CreateContainer within sandbox \"04bdf1c48fa58f1a1280d95ed1b1cfeae060b534f361b092b57c68888b3d66c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4\"" Jul 6 23:26:24.412163 containerd[1994]: time="2025-07-06T23:26:24.411639441Z" level=info msg="StartContainer for \"194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4\"" Jul 6 23:26:24.420791 containerd[1994]: time="2025-07-06T23:26:24.420707278Z" level=info msg="connecting to shim 194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4" address="unix:///run/containerd/s/21f23d1e547a2403e85efae595f0362dd6b1f195cbaa7350cbe871a31422d9d1" protocol=ttrpc version=3 Jul 6 23:26:24.508761 systemd[1]: Started cri-containerd-194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4.scope - libcontainer container 194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4. 
Jul 6 23:26:24.813931 containerd[1994]: time="2025-07-06T23:26:24.813830927Z" level=info msg="StartContainer for \"194397028ad69e0b91f0674b528da9f818761dd948121972d89ecbf2fee9bde4\" returns successfully" Jul 6 23:26:25.236459 kubelet[3301]: I0706 23:26:25.235419 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749ffbdbfc-fxkxv" podStartSLOduration=37.934114531 podStartE2EDuration="44.235388218s" podCreationTimestamp="2025-07-06 23:25:41 +0000 UTC" firstStartedPulling="2025-07-06 23:26:17.625997824 +0000 UTC m=+55.403702256" lastFinishedPulling="2025-07-06 23:26:23.927271415 +0000 UTC m=+61.704975943" observedRunningTime="2025-07-06 23:26:25.202449621 +0000 UTC m=+62.980154077" watchObservedRunningTime="2025-07-06 23:26:25.235388218 +0000 UTC m=+63.013092686" Jul 6 23:26:25.404108 ntpd[1965]: Listen normally on 8 vxlan.calico 192.168.107.0:123 Jul 6 23:26:25.404752 ntpd[1965]: Listen normally on 9 cali3629fe64442 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 6 23:26:25.404833 ntpd[1965]: Listen normally on 10 cali53fba45dba9 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 6 23:26:25.404908 ntpd[1965]: Listen normally on 11 cali686417e4867 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 6 23:26:25.404971 ntpd[1965]: Listen normally on 12 calife9cb6711f9 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 6 23:26:25.405054 ntpd[1965]: Listen normally on 13 calif641e2fea16 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 6 23:26:25.405118 ntpd[1965]: Listen normally on 14 caliae5cd91c8e0 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 6 23:26:25.405178 ntpd[1965]: Listen normally on 15 calia7f0831b898 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 6 23:26:25.405240 ntpd[1965]: Listen normally on 16 cali2d0dd486431 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 6 23:26:25.405301 ntpd[1965]: Listen normally on 17 vxlan.calico [fe80::643b:27ff:fe48:a3e8%12]:123
Jul 6 23:26:26.188317 kubelet[3301]: I0706 23:26:26.185116 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:26:27.189795 kubelet[3301]: I0706 23:26:27.189752 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:26:28.301550 containerd[1994]: time="2025-07-06T23:26:28.301480669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:28.306639 containerd[1994]: time="2025-07-06T23:26:28.306537229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:26:28.308337 containerd[1994]: time="2025-07-06T23:26:28.308223409Z" level=info msg="ImageCreate 
event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:28.316896 containerd[1994]: time="2025-07-06T23:26:28.316786357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:28.320688 containerd[1994]: time="2025-07-06T23:26:28.320017297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.98701562s" Jul 6 23:26:28.321730 containerd[1994]: time="2025-07-06T23:26:28.321509401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:26:28.324536 containerd[1994]: time="2025-07-06T23:26:28.324097033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:26:28.369969 containerd[1994]: time="2025-07-06T23:26:28.369907861Z" level=info msg="CreateContainer within sandbox \"5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:26:28.403031 containerd[1994]: time="2025-07-06T23:26:28.402929941Z" level=info msg="Container 16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:28.440397 containerd[1994]: time="2025-07-06T23:26:28.440198149Z" level=info msg="CreateContainer within sandbox 
\"5dd76d0c1dfcf213084bb60d15d5f2e1262280b164b96a1c0b6fecff93513ba9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\"" Jul 6 23:26:28.442236 containerd[1994]: time="2025-07-06T23:26:28.442095697Z" level=info msg="StartContainer for \"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\"" Jul 6 23:26:28.445343 containerd[1994]: time="2025-07-06T23:26:28.445277522Z" level=info msg="connecting to shim 16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a" address="unix:///run/containerd/s/7dd7998817cda6fed995efe043922d0a2b950dc6267f319148e5ead0da0ad941" protocol=ttrpc version=3 Jul 6 23:26:28.536147 systemd[1]: Started cri-containerd-16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a.scope - libcontainer container 16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a. Jul 6 23:26:28.834053 containerd[1994]: time="2025-07-06T23:26:28.833806947Z" level=info msg="StartContainer for \"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" returns successfully" Jul 6 23:26:29.030870 systemd[1]: Started sshd@9-172.31.21.233:22-147.75.109.163:57672.service - OpenSSH per-connection server daemon (147.75.109.163:57672). 
Jul 6 23:26:29.274245 kubelet[3301]: I0706 23:26:29.274135 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749ffbdbfc-th6v2" podStartSLOduration=41.892075174 podStartE2EDuration="48.27410699s" podCreationTimestamp="2025-07-06 23:25:41 +0000 UTC" firstStartedPulling="2025-07-06 23:26:17.948234017 +0000 UTC m=+55.725938449" lastFinishedPulling="2025-07-06 23:26:24.330265833 +0000 UTC m=+62.107970265" observedRunningTime="2025-07-06 23:26:25.239802562 +0000 UTC m=+63.017507030" watchObservedRunningTime="2025-07-06 23:26:29.27410699 +0000 UTC m=+67.051811434" Jul 6 23:26:29.294647 sshd[5661]: Accepted publickey for core from 147.75.109.163 port 57672 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:29.301024 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:29.329184 systemd-logind[1972]: New session 10 of user core. Jul 6 23:26:29.335119 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:26:29.794729 sshd[5683]: Connection closed by 147.75.109.163 port 57672 Jul 6 23:26:29.795785 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:29.814102 systemd[1]: sshd@9-172.31.21.233:22-147.75.109.163:57672.service: Deactivated successfully. Jul 6 23:26:29.824428 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:26:29.831714 systemd-logind[1972]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:26:29.867775 systemd[1]: Started sshd@10-172.31.21.233:22-147.75.109.163:57680.service - OpenSSH per-connection server daemon (147.75.109.163:57680). Jul 6 23:26:29.871496 systemd-logind[1972]: Removed session 10. 
Jul 6 23:26:29.885818 containerd[1994]: time="2025-07-06T23:26:29.885227585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" id:\"3fb87a5d1b7630978f063d34a594ace1a4df2fca951f4bdd77b2266fafb199c9\" pid:5677 exited_at:{seconds:1751844389 nanos:880905053}" Jul 6 23:26:29.931629 kubelet[3301]: I0706 23:26:29.931463 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bd9d598d6-9mrmm" podStartSLOduration=28.603772615 podStartE2EDuration="37.931436849s" podCreationTimestamp="2025-07-06 23:25:52 +0000 UTC" firstStartedPulling="2025-07-06 23:26:18.995475619 +0000 UTC m=+56.773180063" lastFinishedPulling="2025-07-06 23:26:28.323139865 +0000 UTC m=+66.100844297" observedRunningTime="2025-07-06 23:26:29.27821693 +0000 UTC m=+67.055921410" watchObservedRunningTime="2025-07-06 23:26:29.931436849 +0000 UTC m=+67.709141293" Jul 6 23:26:30.132860 sshd[5700]: Accepted publickey for core from 147.75.109.163 port 57680 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:30.139515 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:30.155641 systemd-logind[1972]: New session 11 of user core. Jul 6 23:26:30.164507 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:26:30.764220 sshd[5707]: Connection closed by 147.75.109.163 port 57680 Jul 6 23:26:30.763099 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:30.779936 systemd[1]: sshd@10-172.31.21.233:22-147.75.109.163:57680.service: Deactivated successfully. Jul 6 23:26:30.787690 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:26:30.792332 systemd-logind[1972]: Session 11 logged out. Waiting for processes to exit. 
Jul 6 23:26:30.839473 systemd[1]: Started sshd@11-172.31.21.233:22-147.75.109.163:57682.service - OpenSSH per-connection server daemon (147.75.109.163:57682). Jul 6 23:26:30.845332 systemd-logind[1972]: Removed session 11. Jul 6 23:26:31.095561 sshd[5720]: Accepted publickey for core from 147.75.109.163 port 57682 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:31.100693 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:31.122146 systemd-logind[1972]: New session 12 of user core. Jul 6 23:26:31.133561 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:26:31.525669 sshd[5722]: Connection closed by 147.75.109.163 port 57682 Jul 6 23:26:31.526949 sshd-session[5720]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:31.537068 systemd-logind[1972]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:26:31.537355 systemd[1]: sshd@11-172.31.21.233:22-147.75.109.163:57682.service: Deactivated successfully. Jul 6 23:26:31.544852 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:26:31.555440 systemd-logind[1972]: Removed session 12. Jul 6 23:26:32.695739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178765733.mount: Deactivated successfully. 
Jul 6 23:26:32.755555 containerd[1994]: time="2025-07-06T23:26:32.755360287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:32.758529 containerd[1994]: time="2025-07-06T23:26:32.758395747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:26:32.762456 containerd[1994]: time="2025-07-06T23:26:32.762363451Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:32.774921 containerd[1994]: time="2025-07-06T23:26:32.774758539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:32.778113 containerd[1994]: time="2025-07-06T23:26:32.777456151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.453275742s" Jul 6 23:26:32.778113 containerd[1994]: time="2025-07-06T23:26:32.777532507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:26:32.784325 containerd[1994]: time="2025-07-06T23:26:32.784032355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:26:32.796287 containerd[1994]: time="2025-07-06T23:26:32.796155547Z" level=info msg="CreateContainer within sandbox 
\"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:26:32.817702 containerd[1994]: time="2025-07-06T23:26:32.817330831Z" level=info msg="Container e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:32.853718 containerd[1994]: time="2025-07-06T23:26:32.853533067Z" level=info msg="CreateContainer within sandbox \"7b1eb914830255dc29ba7f90d112ad793b18d02bb62693c9ed4d6cd9acaa90b0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345\"" Jul 6 23:26:32.856734 containerd[1994]: time="2025-07-06T23:26:32.856632883Z" level=info msg="StartContainer for \"e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345\"" Jul 6 23:26:32.861317 containerd[1994]: time="2025-07-06T23:26:32.861218551Z" level=info msg="connecting to shim e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345" address="unix:///run/containerd/s/9e7053c88eaf3e5fafeb2e5dc13cd0ec9c1228a1a2110babc541c1d27df474a4" protocol=ttrpc version=3 Jul 6 23:26:32.933167 systemd[1]: Started cri-containerd-e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345.scope - libcontainer container e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345. 
Jul 6 23:26:33.122303 containerd[1994]: time="2025-07-06T23:26:33.122225717Z" level=info msg="StartContainer for \"e94cec5fc3ee2b56ed797a20f8d201394993302e536e9d1471046102c2296345\" returns successfully" Jul 6 23:26:33.290334 kubelet[3301]: I0706 23:26:33.290180 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b69cbc649-7qrk5" podStartSLOduration=2.7839300270000003 podStartE2EDuration="20.290123682s" podCreationTimestamp="2025-07-06 23:26:13 +0000 UTC" firstStartedPulling="2025-07-06 23:26:15.277126392 +0000 UTC m=+53.054830836" lastFinishedPulling="2025-07-06 23:26:32.783319975 +0000 UTC m=+70.561024491" observedRunningTime="2025-07-06 23:26:33.287465346 +0000 UTC m=+71.065169802" watchObservedRunningTime="2025-07-06 23:26:33.290123682 +0000 UTC m=+71.067828126" Jul 6 23:26:35.381530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513032502.mount: Deactivated successfully. Jul 6 23:26:36.212075 containerd[1994]: time="2025-07-06T23:26:36.211978808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:36.213850 containerd[1994]: time="2025-07-06T23:26:36.213755300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:26:36.215347 containerd[1994]: time="2025-07-06T23:26:36.215260400Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:36.219230 containerd[1994]: time="2025-07-06T23:26:36.219138092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:26:36.221437 containerd[1994]: time="2025-07-06T23:26:36.221252156Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.437151125s" Jul 6 23:26:36.221437 containerd[1994]: time="2025-07-06T23:26:36.221311976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:26:36.224603 containerd[1994]: time="2025-07-06T23:26:36.224505488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:26:36.230009 containerd[1994]: time="2025-07-06T23:26:36.229948352Z" level=info msg="CreateContainer within sandbox \"69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:26:36.244609 containerd[1994]: time="2025-07-06T23:26:36.241990604Z" level=info msg="Container 3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:26:36.262116 containerd[1994]: time="2025-07-06T23:26:36.262043876Z" level=info msg="CreateContainer within sandbox \"69c525a187cbd90d02084205a1e90f904aee68d38d6a97806bc8d14474988f76\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\"" Jul 6 23:26:36.265086 containerd[1994]: time="2025-07-06T23:26:36.264874448Z" level=info msg="StartContainer for \"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\"" Jul 6 23:26:36.269061 containerd[1994]: time="2025-07-06T23:26:36.268995092Z" level=info msg="connecting to shim 3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9" 
address="unix:///run/containerd/s/b71aab05ab523064d750b012b49bcdaeec318c5685b4fea30aacdfffb13c3bdc" protocol=ttrpc version=3 Jul 6 23:26:36.327036 systemd[1]: Started cri-containerd-3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9.scope - libcontainer container 3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9. Jul 6 23:26:36.429768 containerd[1994]: time="2025-07-06T23:26:36.429664641Z" level=info msg="StartContainer for \"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" returns successfully" Jul 6 23:26:36.575413 systemd[1]: Started sshd@12-172.31.21.233:22-147.75.109.163:39042.service - OpenSSH per-connection server daemon (147.75.109.163:39042). Jul 6 23:26:36.830464 sshd[5828]: Accepted publickey for core from 147.75.109.163 port 39042 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y Jul 6 23:26:36.834013 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:36.843274 systemd-logind[1972]: New session 13 of user core. Jul 6 23:26:36.848881 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:26:37.123804 sshd[5834]: Connection closed by 147.75.109.163 port 39042 Jul 6 23:26:37.124901 sshd-session[5828]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:37.133536 systemd[1]: sshd@12-172.31.21.233:22-147.75.109.163:39042.service: Deactivated successfully. Jul 6 23:26:37.138012 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:26:37.140345 systemd-logind[1972]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:26:37.143897 systemd-logind[1972]: Removed session 13. 
Jul 6 23:26:37.866555 containerd[1994]: time="2025-07-06T23:26:37.866470176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:37.869105 containerd[1994]: time="2025-07-06T23:26:37.868790856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 6 23:26:37.870285 containerd[1994]: time="2025-07-06T23:26:37.869926644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" id:\"ecc5ca42bd3aa4664e40566d2068d3bd733b6a85f0b993a70927596597238608\" pid:5858 exit_status:1 exited_at:{seconds:1751844397 nanos:868545756}"
Jul 6 23:26:37.872177 containerd[1994]: time="2025-07-06T23:26:37.872115672Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:37.880426 containerd[1994]: time="2025-07-06T23:26:37.878849760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:37.880426 containerd[1994]: time="2025-07-06T23:26:37.880216032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.655644976s"
Jul 6 23:26:37.880426 containerd[1994]: time="2025-07-06T23:26:37.880274712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 6 23:26:37.895630 containerd[1994]: time="2025-07-06T23:26:37.895184424Z" level=info msg="CreateContainer within sandbox \"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 6 23:26:37.921644 containerd[1994]: time="2025-07-06T23:26:37.921186181Z" level=info msg="Container 792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:26:37.949746 containerd[1994]: time="2025-07-06T23:26:37.949640401Z" level=info msg="CreateContainer within sandbox \"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7\""
Jul 6 23:26:37.951474 containerd[1994]: time="2025-07-06T23:26:37.951365149Z" level=info msg="StartContainer for \"792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7\""
Jul 6 23:26:37.956506 containerd[1994]: time="2025-07-06T23:26:37.956235733Z" level=info msg="connecting to shim 792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7" address="unix:///run/containerd/s/251de9ee92aeda3362ffef59fb12e012a9397d656ec99dea098d20beea22023a" protocol=ttrpc version=3
Jul 6 23:26:38.010049 systemd[1]: Started cri-containerd-792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7.scope - libcontainer container 792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7.
Jul 6 23:26:38.115680 containerd[1994]: time="2025-07-06T23:26:38.115531042Z" level=info msg="StartContainer for \"792f3de6f2c6abd21c88defa724385dd7a89251a0acb173b7776be3fc5d7b4f7\" returns successfully"
Jul 6 23:26:38.123252 containerd[1994]: time="2025-07-06T23:26:38.121991590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 6 23:26:38.502083 containerd[1994]: time="2025-07-06T23:26:38.502002791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" id:\"4076cbcb7800fb2cd35b468bf728b84d72a003296fb115245aabb29767517c96\" pid:5918 exit_status:1 exited_at:{seconds:1751844398 nanos:501336167}"
Jul 6 23:26:39.899593 containerd[1994]: time="2025-07-06T23:26:39.899477846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:39.902625 containerd[1994]: time="2025-07-06T23:26:39.902531666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 6 23:26:39.904668 containerd[1994]: time="2025-07-06T23:26:39.904461170Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:39.910341 containerd[1994]: time="2025-07-06T23:26:39.910112462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:26:39.912151 containerd[1994]: time="2025-07-06T23:26:39.911781710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.788927596s"
Jul 6 23:26:39.912151 containerd[1994]: time="2025-07-06T23:26:39.911877830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 6 23:26:39.921010 containerd[1994]: time="2025-07-06T23:26:39.920891151Z" level=info msg="CreateContainer within sandbox \"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 6 23:26:39.944214 containerd[1994]: time="2025-07-06T23:26:39.944126211Z" level=info msg="Container 7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:26:39.970826 containerd[1994]: time="2025-07-06T23:26:39.970758267Z" level=info msg="CreateContainer within sandbox \"029101b60ac40c661e2426533c857619e6b3430ee643af18830baacdbcc89084\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72\""
Jul 6 23:26:39.971896 containerd[1994]: time="2025-07-06T23:26:39.971797563Z" level=info msg="StartContainer for \"7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72\""
Jul 6 23:26:39.976034 containerd[1994]: time="2025-07-06T23:26:39.975941991Z" level=info msg="connecting to shim 7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72" address="unix:///run/containerd/s/251de9ee92aeda3362ffef59fb12e012a9397d656ec99dea098d20beea22023a" protocol=ttrpc version=3
Jul 6 23:26:40.031917 systemd[1]: Started cri-containerd-7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72.scope - libcontainer container 7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72.
Jul 6 23:26:40.128860 containerd[1994]: time="2025-07-06T23:26:40.128699616Z" level=info msg="StartContainer for \"7590ecbc828d774329b14e17f07dedb5cf7f7bf74c82f34c0bc053577ff64f72\" returns successfully"
Jul 6 23:26:40.366632 kubelet[3301]: I0706 23:26:40.366456 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-mb4p7" podStartSLOduration=32.013289696 podStartE2EDuration="48.366425269s" podCreationTimestamp="2025-07-06 23:25:52 +0000 UTC" firstStartedPulling="2025-07-06 23:26:19.869840707 +0000 UTC m=+57.647545139" lastFinishedPulling="2025-07-06 23:26:36.222976268 +0000 UTC m=+74.000680712" observedRunningTime="2025-07-06 23:26:37.333795118 +0000 UTC m=+75.111499586" watchObservedRunningTime="2025-07-06 23:26:40.366425269 +0000 UTC m=+78.144129713"
Jul 6 23:26:40.701201 kubelet[3301]: I0706 23:26:40.701041 3301 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 6 23:26:40.701201 kubelet[3301]: I0706 23:26:40.701122 3301 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 6 23:26:42.164678 systemd[1]: Started sshd@13-172.31.21.233:22-147.75.109.163:39048.service - OpenSSH per-connection server daemon (147.75.109.163:39048).
Jul 6 23:26:42.387983 sshd[5965]: Accepted publickey for core from 147.75.109.163 port 39048 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:42.393267 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:42.403709 systemd-logind[1972]: New session 14 of user core.
Jul 6 23:26:42.410855 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:26:42.713293 sshd[5967]: Connection closed by 147.75.109.163 port 39048
Jul 6 23:26:42.714357 sshd-session[5965]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:42.724810 systemd-logind[1972]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:26:42.726382 systemd[1]: sshd@13-172.31.21.233:22-147.75.109.163:39048.service: Deactivated successfully.
Jul 6 23:26:42.732788 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:26:42.738861 systemd-logind[1972]: Removed session 14.
Jul 6 23:26:47.753935 systemd[1]: Started sshd@14-172.31.21.233:22-147.75.109.163:55866.service - OpenSSH per-connection server daemon (147.75.109.163:55866).
Jul 6 23:26:47.950522 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 55866 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:47.953096 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:47.963175 systemd-logind[1972]: New session 15 of user core.
Jul 6 23:26:47.970890 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:26:48.238884 sshd[5993]: Connection closed by 147.75.109.163 port 55866
Jul 6 23:26:48.239780 sshd-session[5991]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:48.248447 systemd[1]: sshd@14-172.31.21.233:22-147.75.109.163:55866.service: Deactivated successfully.
Jul 6 23:26:48.254462 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:26:48.256918 systemd-logind[1972]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:26:48.260840 systemd-logind[1972]: Removed session 15.
Jul 6 23:26:52.017626 kubelet[3301]: I0706 23:26:52.017300 3301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:26:52.060993 kubelet[3301]: I0706 23:26:52.060838 3301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mhl8x" podStartSLOduration=40.138483821 podStartE2EDuration="1m0.060811079s" podCreationTimestamp="2025-07-06 23:25:52 +0000 UTC" firstStartedPulling="2025-07-06 23:26:19.99155288 +0000 UTC m=+57.769257312" lastFinishedPulling="2025-07-06 23:26:39.913880126 +0000 UTC m=+77.691584570" observedRunningTime="2025-07-06 23:26:40.371717989 +0000 UTC m=+78.149422529" watchObservedRunningTime="2025-07-06 23:26:52.060811079 +0000 UTC m=+89.838515547"
Jul 6 23:26:52.178619 containerd[1994]: time="2025-07-06T23:26:52.178275359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" id:\"b3515ab614196494c1cf41ae801d76246ce12e9f28f639a4359eb812022caab9\" pid:6019 exited_at:{seconds:1751844412 nanos:176953811}"
Jul 6 23:26:53.279088 systemd[1]: Started sshd@15-172.31.21.233:22-147.75.109.163:55882.service - OpenSSH per-connection server daemon (147.75.109.163:55882).
Jul 6 23:26:53.507523 sshd[6034]: Accepted publickey for core from 147.75.109.163 port 55882 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:53.510488 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:53.523645 systemd-logind[1972]: New session 16 of user core.
Jul 6 23:26:53.530116 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:26:53.933474 sshd[6036]: Connection closed by 147.75.109.163 port 55882
Jul 6 23:26:53.932448 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:53.944537 systemd[1]: sshd@15-172.31.21.233:22-147.75.109.163:55882.service: Deactivated successfully.
Jul 6 23:26:53.952530 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:26:53.959057 systemd-logind[1972]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:26:53.978015 systemd[1]: Started sshd@16-172.31.21.233:22-147.75.109.163:55896.service - OpenSSH per-connection server daemon (147.75.109.163:55896).
Jul 6 23:26:53.984161 systemd-logind[1972]: Removed session 16.
Jul 6 23:26:54.184989 sshd[6048]: Accepted publickey for core from 147.75.109.163 port 55896 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:54.187825 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:54.204828 systemd-logind[1972]: New session 17 of user core.
Jul 6 23:26:54.211871 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:26:54.969858 sshd[6050]: Connection closed by 147.75.109.163 port 55896
Jul 6 23:26:54.971323 sshd-session[6048]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:54.980010 systemd-logind[1972]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:26:54.980811 systemd[1]: sshd@16-172.31.21.233:22-147.75.109.163:55896.service: Deactivated successfully.
Jul 6 23:26:54.990775 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:26:55.015532 systemd-logind[1972]: Removed session 17.
Jul 6 23:26:55.019281 systemd[1]: Started sshd@17-172.31.21.233:22-147.75.109.163:55912.service - OpenSSH per-connection server daemon (147.75.109.163:55912).
Jul 6 23:26:55.264240 sshd[6060]: Accepted publickey for core from 147.75.109.163 port 55912 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:55.268494 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:55.281679 systemd-logind[1972]: New session 18 of user core.
Jul 6 23:26:55.291874 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:26:56.935730 sshd[6062]: Connection closed by 147.75.109.163 port 55912
Jul 6 23:26:56.934732 sshd-session[6060]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:56.944821 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:26:56.947541 systemd[1]: sshd@17-172.31.21.233:22-147.75.109.163:55912.service: Deactivated successfully.
Jul 6 23:26:56.962058 systemd-logind[1972]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:26:56.985682 systemd[1]: Started sshd@18-172.31.21.233:22-147.75.109.163:58304.service - OpenSSH per-connection server daemon (147.75.109.163:58304).
Jul 6 23:26:56.988513 systemd-logind[1972]: Removed session 18.
Jul 6 23:26:57.213407 sshd[6078]: Accepted publickey for core from 147.75.109.163 port 58304 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:57.216708 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:57.230174 systemd-logind[1972]: New session 19 of user core.
Jul 6 23:26:57.239933 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:26:57.966923 sshd[6082]: Connection closed by 147.75.109.163 port 58304
Jul 6 23:26:57.967811 sshd-session[6078]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:57.980431 systemd[1]: sshd@18-172.31.21.233:22-147.75.109.163:58304.service: Deactivated successfully.
Jul 6 23:26:57.993969 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:26:58.000929 systemd-logind[1972]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:26:58.031928 systemd[1]: Started sshd@19-172.31.21.233:22-147.75.109.163:58310.service - OpenSSH per-connection server daemon (147.75.109.163:58310).
Jul 6 23:26:58.037030 systemd-logind[1972]: Removed session 19.
Jul 6 23:26:58.252542 sshd[6092]: Accepted publickey for core from 147.75.109.163 port 58310 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:26:58.257678 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:26:58.272082 systemd-logind[1972]: New session 20 of user core.
Jul 6 23:26:58.280330 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:26:58.655852 sshd[6094]: Connection closed by 147.75.109.163 port 58310
Jul 6 23:26:58.656900 sshd-session[6092]: pam_unix(sshd:session): session closed for user core
Jul 6 23:26:58.669994 systemd[1]: sshd@19-172.31.21.233:22-147.75.109.163:58310.service: Deactivated successfully.
Jul 6 23:26:58.676266 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:26:58.681938 systemd-logind[1972]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:26:58.693098 systemd-logind[1972]: Removed session 20.
Jul 6 23:26:59.340024 containerd[1994]: time="2025-07-06T23:26:59.339921739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" id:\"4c79157f497802520c854e9767ddc19f19792a64229e747c6452b0750cb2c843\" pid:6118 exited_at:{seconds:1751844419 nanos:338326651}"
Jul 6 23:27:03.695058 systemd[1]: Started sshd@20-172.31.21.233:22-147.75.109.163:58324.service - OpenSSH per-connection server daemon (147.75.109.163:58324).
Jul 6 23:27:03.898180 sshd[6138]: Accepted publickey for core from 147.75.109.163 port 58324 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:27:03.905066 sshd-session[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:03.921656 systemd-logind[1972]: New session 21 of user core.
Jul 6 23:27:03.934963 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:27:04.235619 sshd[6140]: Connection closed by 147.75.109.163 port 58324
Jul 6 23:27:04.233084 sshd-session[6138]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:04.241089 systemd[1]: sshd@20-172.31.21.233:22-147.75.109.163:58324.service: Deactivated successfully.
Jul 6 23:27:04.247423 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:27:04.251147 systemd-logind[1972]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:27:04.257164 systemd-logind[1972]: Removed session 21.
Jul 6 23:27:08.530639 containerd[1994]: time="2025-07-06T23:27:08.530322173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" id:\"f450a17d12ca1c04cae2eaed99694ed7725573a00eb1f65a2a6ac1a19ddf7e5e\" pid:6167 exited_at:{seconds:1751844428 nanos:529651325}"
Jul 6 23:27:09.274471 systemd[1]: Started sshd@21-172.31.21.233:22-147.75.109.163:44210.service - OpenSSH per-connection server daemon (147.75.109.163:44210).
Jul 6 23:27:09.513606 sshd[6180]: Accepted publickey for core from 147.75.109.163 port 44210 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:27:09.521118 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:09.541676 systemd-logind[1972]: New session 22 of user core.
Jul 6 23:27:09.547202 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:27:09.825945 sshd[6182]: Connection closed by 147.75.109.163 port 44210
Jul 6 23:27:09.827090 sshd-session[6180]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:09.838333 systemd[1]: sshd@21-172.31.21.233:22-147.75.109.163:44210.service: Deactivated successfully.
Jul 6 23:27:09.846138 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:27:09.849775 systemd-logind[1972]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:27:09.854286 systemd-logind[1972]: Removed session 22.
Jul 6 23:27:14.281301 containerd[1994]: time="2025-07-06T23:27:14.281230005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" id:\"dbf83d817352878de598decb4d6050222e2edf4ae0b3ab7a2586fb77f083f739\" pid:6207 exited_at:{seconds:1751844434 nanos:280564917}"
Jul 6 23:27:14.872159 systemd[1]: Started sshd@22-172.31.21.233:22-147.75.109.163:44220.service - OpenSSH per-connection server daemon (147.75.109.163:44220).
Jul 6 23:27:15.086313 sshd[6218]: Accepted publickey for core from 147.75.109.163 port 44220 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:27:15.089958 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:15.104710 systemd-logind[1972]: New session 23 of user core.
Jul 6 23:27:15.111103 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:27:15.418614 sshd[6220]: Connection closed by 147.75.109.163 port 44220
Jul 6 23:27:15.418199 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:15.427385 systemd-logind[1972]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:27:15.429428 systemd[1]: sshd@22-172.31.21.233:22-147.75.109.163:44220.service: Deactivated successfully.
Jul 6 23:27:15.436995 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:27:15.442330 systemd-logind[1972]: Removed session 23.
Jul 6 23:27:20.466790 systemd[1]: Started sshd@23-172.31.21.233:22-147.75.109.163:42300.service - OpenSSH per-connection server daemon (147.75.109.163:42300).
Jul 6 23:27:20.714348 sshd[6234]: Accepted publickey for core from 147.75.109.163 port 42300 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:27:20.717289 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:20.728963 systemd-logind[1972]: New session 24 of user core.
Jul 6 23:27:20.734886 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:27:21.024702 sshd[6236]: Connection closed by 147.75.109.163 port 42300
Jul 6 23:27:21.024408 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:21.032678 systemd[1]: sshd@23-172.31.21.233:22-147.75.109.163:42300.service: Deactivated successfully.
Jul 6 23:27:21.039264 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:27:21.043330 systemd-logind[1972]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:27:21.049004 systemd-logind[1972]: Removed session 24.
Jul 6 23:27:21.463440 containerd[1994]: time="2025-07-06T23:27:21.463376753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" id:\"3171df0996b42354953b85c103a0b68258d8f4fe8ca35fc2beaf07af00cea762\" pid:6262 exited_at:{seconds:1751844441 nanos:462487325}"
Jul 6 23:27:22.326523 containerd[1994]: time="2025-07-06T23:27:22.326257121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" id:\"de37065b97380a20c6717622774a44b767d1affd30fc11e7bab6fae48c342741\" pid:6282 exited_at:{seconds:1751844442 nanos:325551317}"
Jul 6 23:27:26.065444 systemd[1]: Started sshd@24-172.31.21.233:22-147.75.109.163:51752.service - OpenSSH per-connection server daemon (147.75.109.163:51752).
Jul 6 23:27:26.278975 sshd[6296]: Accepted publickey for core from 147.75.109.163 port 51752 ssh2: RSA SHA256:eOJ3LHeNdRQ+5gU5LinjA9Wjsmlxw0mUThdlmU3tG3Y
Jul 6 23:27:26.281864 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:26.293104 systemd-logind[1972]: New session 25 of user core.
Jul 6 23:27:26.303868 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:27:26.615960 sshd[6298]: Connection closed by 147.75.109.163 port 51752
Jul 6 23:27:26.616896 sshd-session[6296]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:26.626505 systemd[1]: sshd@24-172.31.21.233:22-147.75.109.163:51752.service: Deactivated successfully.
Jul 6 23:27:26.637248 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:27:26.643326 systemd-logind[1972]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:27:26.647469 systemd-logind[1972]: Removed session 25.
Jul 6 23:27:29.300510 containerd[1994]: time="2025-07-06T23:27:29.300359880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" id:\"c0fd51e8f01ccc3b4c931c8509124936f171c6ae8eb79d0fc89ab88a529d7a68\" pid:6322 exited_at:{seconds:1751844449 nanos:299172780}"
Jul 6 23:27:38.447818 containerd[1994]: time="2025-07-06T23:27:38.447753189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3386077a8d8b19acbabfbd2ab4f16d985846fbe49591622a5e745ecb16856dc9\" id:\"c92614c769e7d0e981f962f5992cd5be0326da9aaee2be511f30920d862446bd\" pid:6346 exited_at:{seconds:1751844458 nanos:447225501}"
Jul 6 23:27:39.999448 systemd[1]: cri-containerd-0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2.scope: Deactivated successfully.
Jul 6 23:27:40.001812 systemd[1]: cri-containerd-0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2.scope: Consumed 6.238s CPU time, 59.5M memory peak, 128K read from disk.
Jul 6 23:27:40.009536 containerd[1994]: time="2025-07-06T23:27:40.009468009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\" id:\"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\" pid:3132 exit_status:1 exited_at:{seconds:1751844460 nanos:8502633}"
Jul 6 23:27:40.031215 containerd[1994]: time="2025-07-06T23:27:40.031141221Z" level=info msg="received exit event container_id:\"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\" id:\"0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2\" pid:3132 exit_status:1 exited_at:{seconds:1751844460 nanos:8502633}"
Jul 6 23:27:40.084843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2-rootfs.mount: Deactivated successfully.
Jul 6 23:27:40.477806 systemd[1]: cri-containerd-43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849.scope: Deactivated successfully.
Jul 6 23:27:40.480304 systemd[1]: cri-containerd-43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849.scope: Consumed 26.441s CPU time, 103.1M memory peak, 812K read from disk.
Jul 6 23:27:40.487110 containerd[1994]: time="2025-07-06T23:27:40.487049903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" id:\"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" pid:3810 exit_status:1 exited_at:{seconds:1751844460 nanos:486337763}"
Jul 6 23:27:40.487525 containerd[1994]: time="2025-07-06T23:27:40.487172903Z" level=info msg="received exit event container_id:\"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" id:\"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" pid:3810 exit_status:1 exited_at:{seconds:1751844460 nanos:486337763}"
Jul 6 23:27:40.529346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849-rootfs.mount: Deactivated successfully.
Jul 6 23:27:40.596801 kubelet[3301]: I0706 23:27:40.596669 3301 scope.go:117] "RemoveContainer" containerID="0fcf5ead3ac88c20af1c48efa251570bbe39d82f303904f3eb8e8409bdfca3e2"
Jul 6 23:27:40.604034 kubelet[3301]: I0706 23:27:40.603990 3301 scope.go:117] "RemoveContainer" containerID="43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849"
Jul 6 23:27:40.604474 containerd[1994]: time="2025-07-06T23:27:40.604355832Z" level=info msg="CreateContainer within sandbox \"02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 6 23:27:40.609875 containerd[1994]: time="2025-07-06T23:27:40.609739608Z" level=info msg="CreateContainer within sandbox \"862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 6 23:27:40.638943 containerd[1994]: time="2025-07-06T23:27:40.638856492Z" level=info msg="Container 756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:27:40.646826 containerd[1994]: time="2025-07-06T23:27:40.645508788Z" level=info msg="Container 63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:27:40.651794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679914502.mount: Deactivated successfully.
Jul 6 23:27:40.676462 containerd[1994]: time="2025-07-06T23:27:40.675861276Z" level=info msg="CreateContainer within sandbox \"862da268c2c60297d1df124926ca2d1527b7babce2834ec0ef492881ada8cc0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\""
Jul 6 23:27:40.677620 containerd[1994]: time="2025-07-06T23:27:40.676916484Z" level=info msg="StartContainer for \"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\""
Jul 6 23:27:40.679567 containerd[1994]: time="2025-07-06T23:27:40.679475616Z" level=info msg="CreateContainer within sandbox \"02d59afca67265e3afcaddd2c87da8c80e6b809b839647755dd5321bac1d2fe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e\""
Jul 6 23:27:40.680492 containerd[1994]: time="2025-07-06T23:27:40.680430720Z" level=info msg="StartContainer for \"756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e\""
Jul 6 23:27:40.682843 containerd[1994]: time="2025-07-06T23:27:40.682775376Z" level=info msg="connecting to shim 756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e" address="unix:///run/containerd/s/31df8e4806a0506154b62fbdbe0b20adfabb5d4a94962b2fff2e935b9846e5bb" protocol=ttrpc version=3
Jul 6 23:27:40.700944 containerd[1994]: time="2025-07-06T23:27:40.700834224Z" level=info msg="connecting to shim 63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0" address="unix:///run/containerd/s/3d6318f0a40fddb509cbb898205566b755f88c8b8ebee98a51f9c101281aaf8b" protocol=ttrpc version=3
Jul 6 23:27:40.722914 systemd[1]: Started cri-containerd-756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e.scope - libcontainer container 756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e.
Jul 6 23:27:40.751889 systemd[1]: Started cri-containerd-63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0.scope - libcontainer container 63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0.
Jul 6 23:27:40.865650 containerd[1994]: time="2025-07-06T23:27:40.863517445Z" level=info msg="StartContainer for \"756c89f2f5d3d1925380825601bbacd959995a4a224c5c7121c8ad023bde483e\" returns successfully"
Jul 6 23:27:40.869159 containerd[1994]: time="2025-07-06T23:27:40.869082289Z" level=info msg="StartContainer for \"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\" returns successfully"
Jul 6 23:27:44.814371 kubelet[3301]: E0706 23:27:44.814300 3301 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 6 23:27:46.792734 systemd[1]: cri-containerd-4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424.scope: Deactivated successfully.
Jul 6 23:27:46.793544 systemd[1]: cri-containerd-4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424.scope: Consumed 3.965s CPU time, 21.9M memory peak, 192K read from disk.
Jul 6 23:27:46.799290 containerd[1994]: time="2025-07-06T23:27:46.799200247Z" level=info msg="received exit event container_id:\"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\" id:\"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\" pid:3139 exit_status:1 exited_at:{seconds:1751844466 nanos:797764087}"
Jul 6 23:27:46.800495 containerd[1994]: time="2025-07-06T23:27:46.799219879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\" id:\"4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424\" pid:3139 exit_status:1 exited_at:{seconds:1751844466 nanos:797764087}"
Jul 6 23:27:46.849115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424-rootfs.mount: Deactivated successfully.
Jul 6 23:27:47.656926 kubelet[3301]: I0706 23:27:47.656814 3301 scope.go:117] "RemoveContainer" containerID="4dfbdd937f9bedfd0fa02823ec4789e27a06b0fb0ea1826db8a042584f6b9424"
Jul 6 23:27:47.676707 containerd[1994]: time="2025-07-06T23:27:47.676609951Z" level=info msg="CreateContainer within sandbox \"5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:27:47.705683 containerd[1994]: time="2025-07-06T23:27:47.704922403Z" level=info msg="Container 21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:27:47.722246 containerd[1994]: time="2025-07-06T23:27:47.722165587Z" level=info msg="CreateContainer within sandbox \"5983888cc377f765dac5d3ff4a0f4d979b75e3f8668baa717a1c50acba2576b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca\""
Jul 6 23:27:47.722934 containerd[1994]: time="2025-07-06T23:27:47.722846851Z" level=info msg="StartContainer for \"21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca\""
Jul 6 23:27:47.724992 containerd[1994]: time="2025-07-06T23:27:47.724921435Z" level=info msg="connecting to shim 21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca" address="unix:///run/containerd/s/323055f57fcccd431aca0d224d5bc393329f18c53d06eb05fb0863d88219aaeb" protocol=ttrpc version=3
Jul 6 23:27:47.779929 systemd[1]: Started cri-containerd-21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca.scope - libcontainer container 21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca.
Jul 6 23:27:47.863644 containerd[1994]: time="2025-07-06T23:27:47.863513948Z" level=info msg="StartContainer for \"21d46799be49293a582eab0d427ea0ff6dab980284f5d2a70dc1921f42a479ca\" returns successfully"
Jul 6 23:27:52.104260 containerd[1994]: time="2025-07-06T23:27:52.104196921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fca8f6215dcf77b287ab4332b600545820b917f25002d888153d5f5aa57f16e3\" id:\"0396476be871ae30fbaef79ce18bfd80473e60313a086d758727b6f8ea32b27c\" pid:6526 exited_at:{seconds:1751844472 nanos:103556757}"
Jul 6 23:27:52.361193 systemd[1]: cri-containerd-63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0.scope: Deactivated successfully.
Jul 6 23:27:52.365803 containerd[1994]: time="2025-07-06T23:27:52.365667574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\" id:\"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\" pid:6415 exit_status:1 exited_at:{seconds:1751844472 nanos:365005558}" Jul 6 23:27:52.366126 containerd[1994]: time="2025-07-06T23:27:52.365896546Z" level=info msg="received exit event container_id:\"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\" id:\"63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0\" pid:6415 exit_status:1 exited_at:{seconds:1751844472 nanos:365005558}" Jul 6 23:27:52.407999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0-rootfs.mount: Deactivated successfully. Jul 6 23:27:52.684688 kubelet[3301]: I0706 23:27:52.683541 3301 scope.go:117] "RemoveContainer" containerID="43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849" Jul 6 23:27:52.684688 kubelet[3301]: I0706 23:27:52.684009 3301 scope.go:117] "RemoveContainer" containerID="63dddf55a525ab43da5f33128c75a02fdc61acb58034d29bad33ec5abcc9fbc0" Jul 6 23:27:52.684688 kubelet[3301]: E0706 23:27:52.684235 3301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-6fn94_tigera-operator(904b00c6-423c-4959-9b02-8acb8e345f83)\"" pod="tigera-operator/tigera-operator-747864d56d-6fn94" podUID="904b00c6-423c-4959-9b02-8acb8e345f83" Jul 6 23:27:52.689494 containerd[1994]: time="2025-07-06T23:27:52.689397612Z" level=info msg="RemoveContainer for \"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\"" Jul 6 23:27:52.701224 containerd[1994]: time="2025-07-06T23:27:52.701142012Z" level=info msg="RemoveContainer for 
\"43d31ea2e1e0b03c6644dbacb804dd921b13985ecd218ff82bdd0911932fa849\" returns successfully" Jul 6 23:27:54.815295 kubelet[3301]: E0706 23:27:54.814927 3301 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-233?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 6 23:27:59.284268 containerd[1994]: time="2025-07-06T23:27:59.284200373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16bcfe9057030d25cae92232d2261d19a7a34bb6fb5ab5f1cc66e58d8db7ee2a\" id:\"01d58eae86f95ce82bb2446870ef090d9af34b8975f364dd4b8481ab58005189\" pid:6571 exit_status:1 exited_at:{seconds:1751844479 nanos:283053329}"