Jul 6 23:27:05.120839 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 6 23:27:05.120885 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:27:05.120909 kernel: KASLR disabled due to lack of seed
Jul 6 23:27:05.120926 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:27:05.120941 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Jul 6 23:27:05.120956 kernel: secureboot: Secure boot disabled
Jul 6 23:27:05.120973 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:27:05.120988 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 6 23:27:05.121003 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 6 23:27:05.121017 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 6 23:27:05.121032 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 6 23:27:05.121051 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 6 23:27:05.121066 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 6 23:27:05.121081 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 6 23:27:05.121098 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 6 23:27:05.121114 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 6 23:27:05.121133 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 6 23:27:05.121149 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 6 23:27:05.121165 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 6 23:27:05.121180 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 6 23:27:05.121196 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 6 23:27:05.121211 kernel: printk: legacy bootconsole [uart0] enabled
Jul 6 23:27:05.121227 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:27:05.121242 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:27:05.121258 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Jul 6 23:27:05.121348 kernel: Zone ranges:
Jul 6 23:27:05.121370 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 6 23:27:05.121393 kernel: DMA32 empty
Jul 6 23:27:05.121409 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 6 23:27:05.121425 kernel: Device empty
Jul 6 23:27:05.121440 kernel: Movable zone start for each node
Jul 6 23:27:05.121455 kernel: Early memory node ranges
Jul 6 23:27:05.121471 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 6 23:27:05.121486 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 6 23:27:05.121501 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 6 23:27:05.121517 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 6 23:27:05.121532 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 6 23:27:05.121547 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 6 23:27:05.121563 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 6 23:27:05.121583 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 6 23:27:05.121605 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 6 23:27:05.121621 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 6 23:27:05.121638 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jul 6 23:27:05.121654 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:27:05.121674 kernel: psci: PSCIv1.0 detected in firmware.
Jul 6 23:27:05.121690 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:27:05.121706 kernel: psci: Trusted OS migration not required
Jul 6 23:27:05.121722 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:27:05.121738 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 6 23:27:05.121754 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:27:05.121770 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:27:05.121787 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:27:05.121804 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:27:05.121820 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:27:05.121836 kernel: CPU features: detected: Spectre-v2
Jul 6 23:27:05.121856 kernel: CPU features: detected: Spectre-v3a
Jul 6 23:27:05.121872 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:27:05.121889 kernel: CPU features: detected: ARM erratum 1742098
Jul 6 23:27:05.121905 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 6 23:27:05.121921 kernel: alternatives: applying boot alternatives
Jul 6 23:27:05.121939 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:27:05.121957 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:27:05.121973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:27:05.121989 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:27:05.122006 kernel: Fallback order for Node 0: 0
Jul 6 23:27:05.122026 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 6 23:27:05.122042 kernel: Policy zone: Normal
Jul 6 23:27:05.122058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:27:05.122075 kernel: software IO TLB: area num 2.
Jul 6 23:27:05.122092 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Jul 6 23:27:05.122109 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:27:05.122126 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:27:05.122144 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:27:05.122161 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:27:05.122178 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:27:05.122195 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:27:05.122212 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:27:05.122232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:27:05.122249 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:27:05.122266 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:27:05.122302 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:27:05.122320 kernel: GICv3: 96 SPIs implemented
Jul 6 23:27:05.122336 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:27:05.122352 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:27:05.122369 kernel: GICv3: GICv3 features: 16 PPIs
Jul 6 23:27:05.122386 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:27:05.122403 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 6 23:27:05.122418 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 6 23:27:05.122435 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:27:05.122457 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:27:05.122474 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 6 23:27:05.122491 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 6 23:27:05.122508 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 6 23:27:05.122525 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:27:05.122542 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 6 23:27:05.122559 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 6 23:27:05.122577 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 6 23:27:05.122594 kernel: Console: colour dummy device 80x25
Jul 6 23:27:05.122611 kernel: printk: legacy console [tty1] enabled
Jul 6 23:27:05.122629 kernel: ACPI: Core revision 20240827
Jul 6 23:27:05.122652 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 6 23:27:05.122670 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:27:05.122688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:27:05.122705 kernel: landlock: Up and running.
Jul 6 23:27:05.122722 kernel: SELinux: Initializing.
Jul 6 23:27:05.122739 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:27:05.122756 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:27:05.122773 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:27:05.122791 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:27:05.122813 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:27:05.122831 kernel: Remapping and enabling EFI services.
Jul 6 23:27:05.122848 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:27:05.122865 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:27:05.122883 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 6 23:27:05.122900 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 6 23:27:05.122917 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 6 23:27:05.122934 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:27:05.122952 kernel: SMP: Total of 2 processors activated.
Jul 6 23:27:05.122983 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:27:05.123001 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:27:05.123023 kernel: CPU features: detected: 32-bit EL1 Support
Jul 6 23:27:05.123041 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:27:05.123059 kernel: alternatives: applying system-wide alternatives
Jul 6 23:27:05.123078 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved)
Jul 6 23:27:05.123096 kernel: devtmpfs: initialized
Jul 6 23:27:05.123118 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:27:05.123137 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:27:05.123154 kernel: 16912 pages in range for non-PLT usage
Jul 6 23:27:05.123172 kernel: 508432 pages in range for PLT usage
Jul 6 23:27:05.123189 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:27:05.123206 kernel: SMBIOS 3.0.0 present.
Jul 6 23:27:05.123223 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 6 23:27:05.123241 kernel: DMI: Memory slots populated: 0/0
Jul 6 23:27:05.123258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:27:05.123312 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:27:05.123333 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:27:05.123352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:27:05.123369 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:27:05.123387 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jul 6 23:27:05.123404 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:27:05.123421 kernel: cpuidle: using governor menu
Jul 6 23:27:05.123439 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:27:05.123456 kernel: ASID allocator initialised with 65536 entries
Jul 6 23:27:05.123498 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:27:05.123517 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:27:05.123535 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:27:05.123552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:27:05.123569 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:27:05.123587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:27:05.123604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:27:05.123621 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:27:05.123638 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:27:05.123660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:27:05.123678 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:27:05.123695 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:27:05.123712 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:27:05.123729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:27:05.123746 kernel: ACPI: Interpreter enabled
Jul 6 23:27:05.123763 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:27:05.123781 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:27:05.123798 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:27:05.123820 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:27:05.123837 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 6 23:27:05.124119 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:27:05.124336 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:27:05.124533 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:27:05.124722 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 6 23:27:05.124909 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 6 23:27:05.124939 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 6 23:27:05.124957 kernel: acpiphp: Slot [1] registered
Jul 6 23:27:05.124975 kernel: acpiphp: Slot [2] registered
Jul 6 23:27:05.124992 kernel: acpiphp: Slot [3] registered
Jul 6 23:27:05.125009 kernel: acpiphp: Slot [4] registered
Jul 6 23:27:05.125026 kernel: acpiphp: Slot [5] registered
Jul 6 23:27:05.125043 kernel: acpiphp: Slot [6] registered
Jul 6 23:27:05.125060 kernel: acpiphp: Slot [7] registered
Jul 6 23:27:05.125078 kernel: acpiphp: Slot [8] registered
Jul 6 23:27:05.125095 kernel: acpiphp: Slot [9] registered
Jul 6 23:27:05.125118 kernel: acpiphp: Slot [10] registered
Jul 6 23:27:05.125135 kernel: acpiphp: Slot [11] registered
Jul 6 23:27:05.125153 kernel: acpiphp: Slot [12] registered
Jul 6 23:27:05.125171 kernel: acpiphp: Slot [13] registered
Jul 6 23:27:05.125188 kernel: acpiphp: Slot [14] registered
Jul 6 23:27:05.125236 kernel: acpiphp: Slot [15] registered
Jul 6 23:27:05.125255 kernel: acpiphp: Slot [16] registered
Jul 6 23:27:05.125299 kernel: acpiphp: Slot [17] registered
Jul 6 23:27:05.125322 kernel: acpiphp: Slot [18] registered
Jul 6 23:27:05.125346 kernel: acpiphp: Slot [19] registered
Jul 6 23:27:05.125364 kernel: acpiphp: Slot [20] registered
Jul 6 23:27:05.125381 kernel: acpiphp: Slot [21] registered
Jul 6 23:27:05.125398 kernel: acpiphp: Slot [22] registered
Jul 6 23:27:05.125415 kernel: acpiphp: Slot [23] registered
Jul 6 23:27:05.125432 kernel: acpiphp: Slot [24] registered
Jul 6 23:27:05.125449 kernel: acpiphp: Slot [25] registered
Jul 6 23:27:05.125467 kernel: acpiphp: Slot [26] registered
Jul 6 23:27:05.125484 kernel: acpiphp: Slot [27] registered
Jul 6 23:27:05.125501 kernel: acpiphp: Slot [28] registered
Jul 6 23:27:05.125522 kernel: acpiphp: Slot [29] registered
Jul 6 23:27:05.125540 kernel: acpiphp: Slot [30] registered
Jul 6 23:27:05.125557 kernel: acpiphp: Slot [31] registered
Jul 6 23:27:05.125574 kernel: PCI host bridge to bus 0000:00
Jul 6 23:27:05.126437 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 6 23:27:05.126631 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:27:05.126807 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:27:05.126988 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 6 23:27:05.127220 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:27:05.130193 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 6 23:27:05.132527 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 6 23:27:05.132773 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 6 23:27:05.132965 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 6 23:27:05.133158 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:27:05.133436 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 6 23:27:05.133635 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 6 23:27:05.133828 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 6 23:27:05.134016 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 6 23:27:05.134231 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:27:05.134507 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 6 23:27:05.134701 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 6 23:27:05.134903 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 6 23:27:05.135092 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 6 23:27:05.137343 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 6 23:27:05.137574 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 6 23:27:05.137747 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:27:05.137917 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 6 23:27:05.137949 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:27:05.137969 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:27:05.137987 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:27:05.138005 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:27:05.138022 kernel: iommu: Default domain type: Translated
Jul 6 23:27:05.138040 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:27:05.138058 kernel: efivars: Registered efivars operations
Jul 6 23:27:05.138075 kernel: vgaarb: loaded
Jul 6 23:27:05.138092 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:27:05.138110 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:27:05.138132 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:27:05.138150 kernel: pnp: PnP ACPI init
Jul 6 23:27:05.138376 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 6 23:27:05.138404 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:27:05.138423 kernel: NET: Registered PF_INET protocol family
Jul 6 23:27:05.138441 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:27:05.138459 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:27:05.138476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:27:05.138501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:27:05.138518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:27:05.138536 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:27:05.138554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:27:05.138572 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:27:05.138589 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:27:05.138607 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:27:05.138624 kernel: kvm [1]: HYP mode not available
Jul 6 23:27:05.138641 kernel: Initialise system trusted keyrings
Jul 6 23:27:05.138663 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:27:05.138681 kernel: Key type asymmetric registered
Jul 6 23:27:05.138698 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:27:05.138715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:27:05.138733 kernel: io scheduler mq-deadline registered
Jul 6 23:27:05.138750 kernel: io scheduler kyber registered
Jul 6 23:27:05.138767 kernel: io scheduler bfq registered
Jul 6 23:27:05.138975 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 6 23:27:05.139007 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:27:05.139026 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:27:05.139044 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 6 23:27:05.139062 kernel: ACPI: button: Sleep Button [SLPB]
Jul 6 23:27:05.139079 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:27:05.139098 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 6 23:27:05.141711 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 6 23:27:05.141748 kernel: printk: legacy console [ttyS0] disabled
Jul 6 23:27:05.141766 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 6 23:27:05.141794 kernel: printk: legacy console [ttyS0] enabled
Jul 6 23:27:05.141813 kernel: printk: legacy bootconsole [uart0] disabled
Jul 6 23:27:05.141830 kernel: thunder_xcv, ver 1.0
Jul 6 23:27:05.141848 kernel: thunder_bgx, ver 1.0
Jul 6 23:27:05.141865 kernel: nicpf, ver 1.0
Jul 6 23:27:05.141882 kernel: nicvf, ver 1.0
Jul 6 23:27:05.142101 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:27:05.142309 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:27:04 UTC (1751844424)
Jul 6 23:27:05.142342 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:27:05.142362 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 6 23:27:05.142379 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:27:05.142397 kernel: watchdog: NMI not fully supported
Jul 6 23:27:05.142414 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:27:05.142431 kernel: Segment Routing with IPv6
Jul 6 23:27:05.142449 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:27:05.142466 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:27:05.142483 kernel: Key type dns_resolver registered
Jul 6 23:27:05.142505 kernel: registered taskstats version 1
Jul 6 23:27:05.142523 kernel: Loading compiled-in X.509 certificates
Jul 6 23:27:05.142540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:27:05.142557 kernel: Demotion targets for Node 0: null
Jul 6 23:27:05.142575 kernel: Key type .fscrypt registered
Jul 6 23:27:05.142592 kernel: Key type fscrypt-provisioning registered
Jul 6 23:27:05.142609 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:27:05.142626 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:27:05.142643 kernel: ima: No architecture policies found
Jul 6 23:27:05.142665 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:27:05.142682 kernel: clk: Disabling unused clocks
Jul 6 23:27:05.142700 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:27:05.142717 kernel: Warning: unable to open an initial console.
Jul 6 23:27:05.142735 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:27:05.142752 kernel: Run /init as init process
Jul 6 23:27:05.142769 kernel: with arguments:
Jul 6 23:27:05.142786 kernel: /init
Jul 6 23:27:05.142803 kernel: with environment:
Jul 6 23:27:05.142819 kernel: HOME=/
Jul 6 23:27:05.142841 kernel: TERM=linux
Jul 6 23:27:05.142858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:27:05.142877 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:27:05.142901 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:27:05.142921 systemd[1]: Detected virtualization amazon.
Jul 6 23:27:05.142939 systemd[1]: Detected architecture arm64.
Jul 6 23:27:05.142957 systemd[1]: Running in initrd.
Jul 6 23:27:05.142980 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:27:05.143000 systemd[1]: Hostname set to .
Jul 6 23:27:05.143019 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:27:05.143037 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:27:05.143056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:05.143075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:05.143096 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:27:05.143115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:27:05.143139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:27:05.143159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:27:05.143180 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:27:05.143199 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:27:05.143219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:05.143238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:05.143257 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:27:05.143301 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:27:05.143324 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:27:05.143343 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:27:05.143362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:27:05.143382 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:27:05.143401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:27:05.143420 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:27:05.143439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:05.143480 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:05.143503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:05.143522 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:27:05.143541 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:27:05.143561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:27:05.143579 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:27:05.143599 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:27:05.143618 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:27:05.143637 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:27:05.143662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:27:05.143681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:05.143700 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:27:05.143721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:05.143744 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:27:05.143764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:27:05.143819 systemd-journald[257]: Collecting audit messages is disabled.
Jul 6 23:27:05.143865 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:27:05.143892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:27:05.143925 kernel: Bridge firewalling registered
Jul 6 23:27:05.143949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:27:05.143969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:05.143989 systemd-journald[257]: Journal started
Jul 6 23:27:05.144030 systemd-journald[257]: Runtime Journal (/run/log/journal/ec21083854d5c4e2653eece1fb0a4fd5) is 8M, max 75.3M, 67.3M free.
Jul 6 23:27:05.099941 systemd-modules-load[259]: Inserted module 'overlay'
Jul 6 23:27:05.133182 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jul 6 23:27:05.156580 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:27:05.166499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:27:05.173551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:27:05.185364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:05.194171 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:27:05.207177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:05.227452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:05.243235 systemd-tmpfiles[279]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:27:05.243382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:05.260498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:27:05.267323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:05.284770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:05.318484 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:27:05.366825 systemd-resolved[299]: Positive Trust Anchors:
Jul 6 23:27:05.366859 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:27:05.366922 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:27:05.481319 kernel: SCSI subsystem initialized
Jul 6 23:27:05.489328 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:27:05.502521 kernel: iscsi: registered transport (tcp)
Jul 6 23:27:05.523477 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:27:05.523550 kernel: QLogic iSCSI HBA Driver
Jul 6 23:27:05.557453 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:27:05.597756 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:27:05.612900 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:27:05.647329 kernel: random: crng init done
Jul 6 23:27:05.647036 systemd-resolved[299]: Defaulting to hostname 'linux'.
Jul 6 23:27:05.651481 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:05.659735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:05.711327 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:27:05.716536 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:27:05.801341 kernel: raid6: neonx8 gen() 6506 MB/s
Jul 6 23:27:05.818310 kernel: raid6: neonx4 gen() 6505 MB/s
Jul 6 23:27:05.836309 kernel: raid6: neonx2 gen() 5437 MB/s
Jul 6 23:27:05.853311 kernel: raid6: neonx1 gen() 3930 MB/s
Jul 6 23:27:05.870309 kernel: raid6: int64x8 gen() 3621 MB/s
Jul 6 23:27:05.887311 kernel: raid6: int64x4 gen() 3703 MB/s
Jul 6 23:27:05.904309 kernel: raid6: int64x2 gen() 3588 MB/s
Jul 6 23:27:05.922269 kernel: raid6: int64x1 gen() 2758 MB/s
Jul 6 23:27:05.922323 kernel: raid6: using algorithm neonx8 gen() 6506 MB/s
Jul 6 23:27:05.940288 kernel: raid6: .... xor() 4721 MB/s, rmw enabled
Jul 6 23:27:05.940322 kernel: raid6: using neon recovery algorithm
Jul 6 23:27:05.948900 kernel: xor: measuring software checksum speed
Jul 6 23:27:05.948959 kernel: 8regs : 12484 MB/sec
Jul 6 23:27:05.950108 kernel: 32regs : 13044 MB/sec
Jul 6 23:27:05.952387 kernel: arm64_neon : 8695 MB/sec
Jul 6 23:27:05.952421 kernel: xor: using function: 32regs (13044 MB/sec)
Jul 6 23:27:06.043325 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:27:06.054709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:27:06.061515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:06.103026 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Jul 6 23:27:06.113146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:06.130584 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:27:06.183947 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Jul 6 23:27:06.227502 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:06.232642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:27:06.360987 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:06.372350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:27:06.532311 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 6 23:27:06.542323 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 6 23:27:06.550592 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:27:06.550666 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 6 23:27:06.553107 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 6 23:27:06.557293 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 6 23:27:06.557610 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 6 23:27:06.557830 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:27:06.559898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:27:06.562310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:06.570319 kernel: GPT:9289727 != 16777215
Jul 6 23:27:06.570357 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:27:06.570382 kernel: GPT:9289727 != 16777215
Jul 6 23:27:06.570404 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:27:06.570427 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:06.570460 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:fb:bf:56:4e:cd
Jul 6 23:27:06.575736 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:27:06.582319 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:06.595679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:06.605075 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:06.635357 kernel: nvme nvme0: using unchecked data buffer
Jul 6 23:27:06.656372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:06.802642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 6 23:27:06.831477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 6 23:27:06.834134 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:06.878767 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 6 23:27:06.899845 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 6 23:27:06.900738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 6 23:27:06.904106 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:06.905483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:06.907323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:27:06.913480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:27:06.935725 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:27:06.963337 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:06.964542 disk-uuid[687]: Primary Header is updated.
Jul 6 23:27:06.964542 disk-uuid[687]: Secondary Entries is updated.
Jul 6 23:27:06.964542 disk-uuid[687]: Secondary Header is updated.
Jul 6 23:27:06.976434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:07.999371 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 6 23:27:08.000975 disk-uuid[693]: The operation has completed successfully.
Jul 6 23:27:08.008354 kernel: block device autoloading is deprecated and will be removed.
Jul 6 23:27:08.202661 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:27:08.204342 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:27:08.287664 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:27:08.327639 sh[957]: Success
Jul 6 23:27:08.355749 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:27:08.355835 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:27:08.357873 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:27:08.370315 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:27:08.473941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:27:08.480413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:27:08.498988 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:27:08.525089 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:27:08.525166 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (980)
Jul 6 23:27:08.529619 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 6 23:27:08.529684 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:08.529712 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:27:08.642652 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:27:08.646951 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:27:08.653844 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:27:08.659138 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:27:08.668474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:27:08.712527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009)
Jul 6 23:27:08.716516 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:08.716590 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:08.718162 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:08.734349 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:08.736451 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:27:08.740672 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:27:08.848592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:27:08.857600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:27:08.927082 systemd-networkd[1149]: lo: Link UP
Jul 6 23:27:08.927572 systemd-networkd[1149]: lo: Gained carrier
Jul 6 23:27:08.932087 systemd-networkd[1149]: Enumeration completed
Jul 6 23:27:08.932406 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:27:08.933484 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:08.933491 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:27:08.939404 systemd[1]: Reached target network.target - Network.
Jul 6 23:27:08.952667 systemd-networkd[1149]: eth0: Link UP
Jul 6 23:27:08.952679 systemd-networkd[1149]: eth0: Gained carrier
Jul 6 23:27:08.952700 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:08.968433 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.26.116/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 6 23:27:09.238318 ignition[1066]: Ignition 2.21.0
Jul 6 23:27:09.238346 ignition[1066]: Stage: fetch-offline
Jul 6 23:27:09.240473 ignition[1066]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.240504 ignition[1066]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:09.246623 ignition[1066]: Ignition finished successfully
Jul 6 23:27:09.249833 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:09.255453 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:27:09.293023 ignition[1161]: Ignition 2.21.0
Jul 6 23:27:09.293054 ignition[1161]: Stage: fetch
Jul 6 23:27:09.293724 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.294093 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:09.295804 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:09.310564 ignition[1161]: PUT result: OK
Jul 6 23:27:09.314877 ignition[1161]: parsed url from cmdline: ""
Jul 6 23:27:09.315015 ignition[1161]: no config URL provided
Jul 6 23:27:09.316632 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:27:09.318919 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:27:09.320849 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:09.324010 ignition[1161]: PUT result: OK
Jul 6 23:27:09.324397 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 6 23:27:09.327988 ignition[1161]: GET result: OK
Jul 6 23:27:09.328171 ignition[1161]: parsing config with SHA512: 600175d53cbb111d2c3ac5f7ba977218dd9d0a948611953d9ffc0ccd2cbe0921e9131615f299b7cfcbce879a30572a3a11dd60bf74770f0a93fe96fa5f022399
Jul 6 23:27:09.342470 unknown[1161]: fetched base config from "system"
Jul 6 23:27:09.342499 unknown[1161]: fetched base config from "system"
Jul 6 23:27:09.342512 unknown[1161]: fetched user config from "aws"
Jul 6 23:27:09.346233 ignition[1161]: fetch: fetch complete
Jul 6 23:27:09.346245 ignition[1161]: fetch: fetch passed
Jul 6 23:27:09.346397 ignition[1161]: Ignition finished successfully
Jul 6 23:27:09.357367 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:27:09.361304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:27:09.409170 ignition[1168]: Ignition 2.21.0
Jul 6 23:27:09.409734 ignition[1168]: Stage: kargs
Jul 6 23:27:09.410303 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.410328 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:09.410494 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:09.420032 ignition[1168]: PUT result: OK
Jul 6 23:27:09.424754 ignition[1168]: kargs: kargs passed
Jul 6 23:27:09.424855 ignition[1168]: Ignition finished successfully
Jul 6 23:27:09.430238 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:09.434851 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:27:09.485665 ignition[1174]: Ignition 2.21.0
Jul 6 23:27:09.486924 ignition[1174]: Stage: disks
Jul 6 23:27:09.487491 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.487515 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:09.487656 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:09.493915 ignition[1174]: PUT result: OK
Jul 6 23:27:09.500592 ignition[1174]: disks: disks passed
Jul 6 23:27:09.500690 ignition[1174]: Ignition finished successfully
Jul 6 23:27:09.510172 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:27:09.510731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:09.516912 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:27:09.519953 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:27:09.522770 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:27:09.522830 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:27:09.533905 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:27:09.591958 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:27:09.597327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:27:09.603089 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:27:09.732311 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 6 23:27:09.733786 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:27:09.741574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:27:09.748641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:09.750999 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:27:09.751630 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:27:09.751703 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:27:09.751746 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:09.781998 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:27:09.787000 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:27:09.814398 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202)
Jul 6 23:27:09.818527 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:09.818570 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:09.818596 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:09.827154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:10.197654 systemd-networkd[1149]: eth0: Gained IPv6LL
Jul 6 23:27:10.208756 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:27:10.245941 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:27:10.254082 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:27:10.262315 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:27:10.556338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:27:10.561109 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:27:10.569044 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:10.593863 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:27:10.596624 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:10.636190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:10.647576 ignition[1315]: INFO : Ignition 2.21.0
Jul 6 23:27:10.647576 ignition[1315]: INFO : Stage: mount
Jul 6 23:27:10.651747 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.654080 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.654080 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.659450 ignition[1315]: INFO : PUT result: OK
Jul 6 23:27:10.663198 ignition[1315]: INFO : mount: mount passed
Jul 6 23:27:10.665014 ignition[1315]: INFO : Ignition finished successfully
Jul 6 23:27:10.669410 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:27:10.675084 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:27:10.737608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:10.771312 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Jul 6 23:27:10.775504 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:27:10.775551 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:27:10.776834 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 6 23:27:10.784929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:10.831322 ignition[1344]: INFO : Ignition 2.21.0
Jul 6 23:27:10.833346 ignition[1344]: INFO : Stage: files
Jul 6 23:27:10.833346 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.833346 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:10.839932 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:10.842601 ignition[1344]: INFO : PUT result: OK
Jul 6 23:27:10.854020 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:27:10.866703 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:27:10.866703 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:27:10.896148 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:27:10.899219 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:27:10.902654 unknown[1344]: wrote ssh authorized keys file for user: core
Jul 6 23:27:10.905373 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:27:10.909953 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:27:10.914149 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 6 23:27:11.000690 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:27:11.165764 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:27:11.169950 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:11.173713 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:11.173713 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:11.181026 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:11.184826 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:11.188733 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:11.192503 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:11.192503 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:11.203838 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:11.207968 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:11.212246 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:27:11.212246 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:27:11.212246 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:27:11.212246 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 6 23:27:11.810219 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:27:12.201522 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:27:12.206526 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:27:12.214551 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:12.219334 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:12.219334 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:27:12.219334 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:12.219334 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:12.239697 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:12.239697 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:12.239697 ignition[1344]: INFO : files: files passed
Jul 6 23:27:12.239697 ignition[1344]: INFO : Ignition finished successfully
Jul 6 23:27:12.226381 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:27:12.235520 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:27:12.258575 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:27:12.271134 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:27:12.273937 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:27:12.316029 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:12.316029 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:12.324900 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:12.331146 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:12.336700 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:27:12.343247 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:27:12.417310 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:27:12.419438 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:27:12.422581 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:27:12.427530 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:27:12.431908 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:27:12.433457 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:27:12.487253 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:12.494957 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:27:12.542204 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:12.547596 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:12.550559 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:27:12.556737 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:27:12.556976 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:12.564572 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:27:12.569260 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:27:12.577342 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:27:12.581738 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:12.584445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:12.587103 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:27:12.591794 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:27:12.596476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:12.600707 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:27:12.605938 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:27:12.610305 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:27:12.614554 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:27:12.614852 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:12.624941 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:12.628067 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:12.634692 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:27:12.638389 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:12.641148 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:27:12.641398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:12.649399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:27:12.649660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:12.654390 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:27:12.654616 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:27:12.663855 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:27:12.672317 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:27:12.673140 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:12.698499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:12.708076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:27:12.711638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:12.717576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:27:12.717852 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:12.744924 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:27:12.745169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:27:12.759338 ignition[1398]: INFO : Ignition 2.21.0
Jul 6 23:27:12.759338 ignition[1398]: INFO : Stage: umount
Jul 6 23:27:12.763579 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:12.765831 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 6 23:27:12.768524 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:27:12.773473 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 6 23:27:12.777372 ignition[1398]: INFO : PUT result: OK
Jul 6 23:27:12.781893 ignition[1398]: INFO : umount: umount passed
Jul 6 23:27:12.783741 ignition[1398]: INFO : Ignition finished successfully
Jul 6 23:27:12.790417 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:27:12.790656 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:27:12.798989 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:27:12.799239 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:27:12.805296 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:27:12.805426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:12.812527 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:27:12.812736 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:27:12.822736 systemd[1]: Stopped target network.target - Network.
Jul 6 23:27:12.824705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:27:12.824810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:12.829204 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:27:12.831081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:27:12.833160 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:12.833324 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:27:12.840184 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:27:12.843326 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:27:12.843401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:27:12.851935 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:27:12.852008 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:27:12.855972 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:27:12.856070 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:27:12.858481 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:27:12.858560 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:27:12.868921 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:27:12.875873 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:12.900066 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:27:12.900264 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:27:12.916207 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:27:12.923795 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:27:12.923982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:12.937923 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:27:12.938816 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:27:12.939050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:12.945479 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 6 23:27:12.949994 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:27:12.950074 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:12.950164 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:27:12.950255 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:27:12.967587 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:27:12.971131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:27:12.971243 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:27:12.981920 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:27:12.982033 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:12.986710 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:27:12.986803 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:12.993103 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:27:12.993185 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:12.998507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:13.006456 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:27:13.006591 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:13.032513 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:27:13.034110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:13.040850 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:27:13.041079 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:13.048351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:27:13.048426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:13.051119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:27:13.051214 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:27:13.059751 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:27:13.059852 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:27:13.067731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:27:13.067854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:13.080407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:27:13.083439 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:27:13.083567 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:27:13.089390 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:27:13.089503 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:13.101147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:27:13.101242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:13.111156 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 6 23:27:13.111304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:27:13.111466 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:13.114255 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:27:13.114461 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:27:13.138039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:27:13.139425 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:27:13.148185 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:27:13.150026 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:27:13.192085 systemd[1]: Switching root.
Jul 6 23:27:13.237773 systemd-journald[257]: Journal stopped
Jul 6 23:27:15.588400 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:27:15.591431 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:27:15.591495 kernel: SELinux: policy capability open_perms=1
Jul 6 23:27:15.591528 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:27:15.591558 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:27:15.591593 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:27:15.591622 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:27:15.591651 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:27:15.591679 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:27:15.591707 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:27:15.591734 kernel: audit: type=1403 audit(1751844433.693:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:27:15.591774 systemd[1]: Successfully loaded SELinux policy in 90.868ms.
Jul 6 23:27:15.591823 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.671ms.
Jul 6 23:27:15.591856 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:27:15.591891 systemd[1]: Detected virtualization amazon.
Jul 6 23:27:15.591919 systemd[1]: Detected architecture arm64.
Jul 6 23:27:15.591947 systemd[1]: Detected first boot.
Jul 6 23:27:15.591975 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:27:15.592005 zram_generator::config[1442]: No configuration found.
Jul 6 23:27:15.592034 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:27:15.592063 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:27:15.592095 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:27:15.592126 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:27:15.592159 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:27:15.592190 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:27:15.592218 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:27:15.592249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:27:15.595664 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:27:15.595724 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:27:15.595756 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:27:15.595786 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:27:15.595824 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:27:15.595856 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:27:15.598376 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:15.600789 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:15.600834 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:27:15.600875 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:27:15.600910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:27:15.600943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:27:15.600971 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:27:15.601008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:15.601039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:15.601068 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:27:15.601095 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:27:15.601125 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:27:15.601156 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:27:15.601184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:15.601214 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:27:15.601247 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:27:15.601300 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:27:15.601333 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:27:15.601363 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:27:15.601396 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:27:15.601428 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:15.601458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:15.601486 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:15.601513 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:27:15.601557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:27:15.601588 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:27:15.601616 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:27:15.601646 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:27:15.601677 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:27:15.601705 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:27:15.601734 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:27:15.601766 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:27:15.601801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:27:15.601835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:15.601864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:27:15.601891 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:27:15.601920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:27:15.601947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:27:15.601975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:27:15.602005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:27:15.602033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:27:15.602068 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:27:15.602099 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:27:15.602127 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:27:15.602154 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:27:15.602182 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:27:15.602213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:15.602244 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:27:15.606324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:27:15.606404 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:27:15.606435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:27:15.606467 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:27:15.606497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:27:15.606528 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:27:15.606562 systemd[1]: Stopped verity-setup.service.
Jul 6 23:27:15.606597 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:27:15.606629 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:27:15.606663 kernel: loop: module loaded
Jul 6 23:27:15.606691 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:27:15.606720 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:27:15.606752 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:27:15.606781 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:27:15.606811 kernel: fuse: init (API version 7.41)
Jul 6 23:27:15.606838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:15.606869 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:27:15.606898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:27:15.606926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:27:15.606954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:27:15.606985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:27:15.607019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:27:15.607048 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:27:15.607076 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:27:15.607108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:27:15.607137 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:27:15.607168 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:27:15.607250 systemd-journald[1521]: Collecting audit messages is disabled.
Jul 6 23:27:15.607337 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:27:15.607371 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:27:15.607421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:15.607453 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:27:15.607484 systemd-journald[1521]: Journal started
Jul 6 23:27:15.607529 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec21083854d5c4e2653eece1fb0a4fd5) is 8M, max 75.3M, 67.3M free.
Jul 6 23:27:14.981891 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:27:15.003951 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 6 23:27:15.616374 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:27:15.616445 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:27:15.004787 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:27:15.621360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:27:15.632193 kernel: ACPI: bus type drm_connector registered
Jul 6 23:27:15.628760 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:27:15.638487 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:27:15.639035 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:27:15.680371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:27:15.693590 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:27:15.697462 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:27:15.697530 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:27:15.704602 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:27:15.714626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:27:15.717572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:15.722056 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:27:15.728641 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:27:15.732489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:27:15.736675 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:27:15.747671 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:27:15.751602 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:27:15.822634 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec21083854d5c4e2653eece1fb0a4fd5 is 86.223ms for 926 entries.
Jul 6 23:27:15.822634 systemd-journald[1521]: System Journal (/var/log/journal/ec21083854d5c4e2653eece1fb0a4fd5) is 8M, max 195.6M, 187.6M free.
Jul 6 23:27:15.937621 systemd-journald[1521]: Received client request to flush runtime journal.
Jul 6 23:27:15.937715 kernel: loop0: detected capacity change from 0 to 138376
Jul 6 23:27:15.824962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:15.828148 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:27:15.831033 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:27:15.837419 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:27:15.867776 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:27:15.874713 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:27:15.944011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:27:15.966932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:15.986344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:27:15.987547 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:27:16.009915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:27:16.020327 kernel: loop1: detected capacity change from 0 to 107312
Jul 6 23:27:16.033963 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:27:16.044057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:27:16.108643 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jul 6 23:27:16.109188 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jul 6 23:27:16.119172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:16.144339 kernel: loop2: detected capacity change from 0 to 61240
Jul 6 23:27:16.184320 kernel: loop3: detected capacity change from 0 to 203944
Jul 6 23:27:16.236538 kernel: loop4: detected capacity change from 0 to 138376
Jul 6 23:27:16.261319 kernel: loop5: detected capacity change from 0 to 107312
Jul 6 23:27:16.283337 kernel: loop6: detected capacity change from 0 to 61240
Jul 6 23:27:16.319334 kernel: loop7: detected capacity change from 0 to 203944
Jul 6 23:27:16.350889 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 6 23:27:16.352547 (sd-merge)[1603]: Merged extensions into '/usr'.
Jul 6 23:27:16.360680 systemd[1]: Reload requested from client PID 1570 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:27:16.360910 systemd[1]: Reloading...
Jul 6 23:27:16.480399 zram_generator::config[1626]: No configuration found.
Jul 6 23:27:16.768619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:27:16.960410 systemd[1]: Reloading finished in 598 ms.
Jul 6 23:27:17.008459 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:27:17.020538 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:27:17.029671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:27:17.074078 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:27:17.074103 systemd[1]: Reloading...
Jul 6 23:27:17.133162 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:27:17.133260 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:27:17.133887 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:27:17.134460 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:27:17.136353 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:27:17.136942 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Jul 6 23:27:17.137067 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Jul 6 23:27:17.150710 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:27:17.150738 systemd-tmpfiles[1681]: Skipping /boot
Jul 6 23:27:17.229807 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:27:17.232363 systemd-tmpfiles[1681]: Skipping /boot
Jul 6 23:27:17.279858 zram_generator::config[1715]: No configuration found.
Jul 6 23:27:17.324798 ldconfig[1562]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:27:17.480298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:27:17.659922 systemd[1]: Reloading finished in 585 ms.
Jul 6 23:27:17.691327 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:27:17.694465 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:27:17.717401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:17.733377 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:27:17.751601 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:27:17.758731 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:27:17.768960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:17.781187 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:17.787455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:27:17.797836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:17.804732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:27:17.816901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:27:17.822379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:27:17.824760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:17.824998 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:17.831610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:17.831953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:17.832155 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:17.847357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:17.852417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:27:17.854890 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:17.855109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:17.855547 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:27:17.863502 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:27:17.884431 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:27:17.887037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:27:17.904720 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:27:17.923054 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:27:17.928385 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:27:17.933270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:27:17.954877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:27:17.958914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:27:17.963573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:27:17.968659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:27:17.972042 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:27:17.973497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:27:17.982899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:27:17.983078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:27:17.996401 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:27:18.003912 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:27:18.011384 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:27:18.046175 systemd-udevd[1769]: Using default interface naming scheme 'v255'.
Jul 6 23:27:18.062508 augenrules[1804]: No rules
Jul 6 23:27:18.066247 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:27:18.068185 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:27:18.099109 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:27:18.114838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:18.123104 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:27:18.301238 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:27:18.316190 (udev-worker)[1821]: Network interface NamePolicy= disabled on kernel command line.
Jul 6 23:27:18.630341 systemd-networkd[1815]: lo: Link UP
Jul 6 23:27:18.630356 systemd-networkd[1815]: lo: Gained carrier
Jul 6 23:27:18.635232 systemd-networkd[1815]: Enumeration completed
Jul 6 23:27:18.635911 systemd-resolved[1767]: Positive Trust Anchors:
Jul 6 23:27:18.635932 systemd-resolved[1767]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:27:18.635996 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:27:18.637756 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:27:18.638738 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:18.638747 systemd-networkd[1815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:27:18.646652 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:27:18.651409 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:27:18.667829 systemd-resolved[1767]: Defaulting to hostname 'linux'. Jul 6 23:27:18.675363 systemd-networkd[1815]: eth0: Link UP Jul 6 23:27:18.675807 systemd-networkd[1815]: eth0: Gained carrier Jul 6 23:27:18.675844 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:18.685444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:27:18.688321 systemd[1]: Reached target network.target - Network. Jul 6 23:27:18.690302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:27:18.692870 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:27:18.695569 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:27:18.699521 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:27:18.702667 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:27:18.705153 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:27:18.708495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jul 6 23:27:18.711238 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:27:18.711317 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:18.713331 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:18.718429 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:27:18.725378 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:27:18.735682 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:27:18.740439 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:27:18.743205 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:27:18.751115 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:27:18.753601 systemd-networkd[1815]: eth0: DHCPv4 address 172.31.26.116/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:27:18.756381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:27:18.760209 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:27:18.762856 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:18.765451 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:27:18.767634 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:18.767817 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:18.771617 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:27:18.778802 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:27:18.786605 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jul 6 23:27:18.792884 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:27:18.801766 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:27:18.808306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:27:18.810546 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:27:18.815876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:27:18.823685 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:27:18.832337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:27:18.839925 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:27:18.847695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:27:18.861043 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:27:18.877632 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:27:18.882013 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:27:18.886478 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:27:18.895414 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:27:18.907151 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:27:18.911681 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:27:18.914978 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 6 23:27:18.967319 jq[1888]: false Jul 6 23:27:18.967178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:27:18.969157 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:27:19.054102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:27:19.055318 jq[1905]: true Jul 6 23:27:19.057368 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:27:19.065453 extend-filesystems[1889]: Found /dev/nvme0n1p6 Jul 6 23:27:19.091188 (ntainerd)[1930]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:27:19.112236 extend-filesystems[1889]: Found /dev/nvme0n1p9 Jul 6 23:27:19.093900 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:27:19.094335 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:27:19.122007 extend-filesystems[1889]: Checking size of /dev/nvme0n1p9 Jul 6 23:27:19.131966 dbus-daemon[1884]: [system] SELinux support is enabled Jul 6 23:27:19.133888 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:27:19.141723 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:27:19.141778 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:27:19.144664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:27:19.144700 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 6 23:27:19.160978 dbus-daemon[1884]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1815 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:19.170661 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:27:19.174788 tar[1915]: linux-arm64/helm Jul 6 23:27:19.178705 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 6 23:27:19.185770 jq[1952]: true Jul 6 23:27:19.207983 extend-filesystems[1889]: Resized partition /dev/nvme0n1p9 Jul 6 23:27:19.216119 extend-filesystems[1970]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:27:19.221327 update_engine[1902]: I20250706 23:27:19.218780 1902 main.cc:92] Flatcar Update Engine starting Jul 6 23:27:19.236582 update_engine[1902]: I20250706 23:27:19.235020 1902 update_check_scheduler.cc:74] Next update check in 6m59s Jul 6 23:27:19.233136 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:27:19.245633 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:27:19.251659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:27:19.260378 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 6 23:27:19.348772 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:27:19.367995 extend-filesystems[1970]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:27:19.367995 extend-filesystems[1970]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:27:19.367995 extend-filesystems[1970]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 6 23:27:19.384004 extend-filesystems[1889]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:27:19.371310 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 6 23:27:19.415081 coreos-metadata[1882]: Jul 06 23:27:19.414 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:19.427965 coreos-metadata[1882]: Jul 06 23:27:19.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:27:19.429187 coreos-metadata[1882]: Jul 06 23:27:19.428 INFO Fetch successful Jul 6 23:27:19.429187 coreos-metadata[1882]: Jul 06 23:27:19.429 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:27:19.430876 coreos-metadata[1882]: Jul 06 23:27:19.430 INFO Fetch successful Jul 6 23:27:19.430876 coreos-metadata[1882]: Jul 06 23:27:19.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:27:19.435678 coreos-metadata[1882]: Jul 06 23:27:19.435 INFO Fetch successful Jul 6 23:27:19.435678 coreos-metadata[1882]: Jul 06 23:27:19.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:27:19.437412 coreos-metadata[1882]: Jul 06 23:27:19.437 INFO Fetch successful Jul 6 23:27:19.437412 coreos-metadata[1882]: Jul 06 23:27:19.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:27:19.443384 coreos-metadata[1882]: Jul 06 23:27:19.442 INFO Fetch failed with 404: resource not found Jul 6 23:27:19.443498 coreos-metadata[1882]: Jul 06 23:27:19.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:27:19.443498 coreos-metadata[1882]: Jul 06 23:27:19.443 INFO Fetch successful Jul 6 23:27:19.443498 coreos-metadata[1882]: Jul 06 23:27:19.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:27:19.443498 coreos-metadata[1882]: Jul 06 23:27:19.443 INFO Fetch successful Jul 6 23:27:19.443498 coreos-metadata[1882]: Jul 06 23:27:19.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:27:19.443783 systemd[1]: 
Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:27:19.450453 coreos-metadata[1882]: Jul 06 23:27:19.449 INFO Fetch successful Jul 6 23:27:19.450453 coreos-metadata[1882]: Jul 06 23:27:19.449 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:27:19.454378 coreos-metadata[1882]: Jul 06 23:27:19.454 INFO Fetch successful Jul 6 23:27:19.454378 coreos-metadata[1882]: Jul 06 23:27:19.454 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:27:19.457359 coreos-metadata[1882]: Jul 06 23:27:19.454 INFO Fetch successful Jul 6 23:27:19.495903 bash[2007]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:19.521617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:27:19.534031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:27:19.546681 systemd[1]: Starting sshkeys.service... Jul 6 23:27:19.594678 ntpd[1892]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:00 UTC 2025 (1): Starting Jul 6 23:27:19.600148 ntpd[1892]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:27:19.600183 ntpd[1892]: ---------------------------------------------------- Jul 6 23:27:19.600201 ntpd[1892]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:27:19.600218 ntpd[1892]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:27:19.600234 ntpd[1892]: corporation. Support and training for ntp-4 are Jul 6 23:27:19.600251 ntpd[1892]: available at https://www.nwtime.org/support Jul 6 23:27:19.600267 ntpd[1892]: ---------------------------------------------------- Jul 6 23:27:19.611921 ntpd[1892]: proto: precision = 0.096 usec (-23) Jul 6 23:27:19.613929 ntpd[1892]: basedate set to 2025-06-24 
Jul 6 23:27:19.613966 ntpd[1892]: gps base set to 2025-06-29 (week 2373) Jul 6 23:27:19.619777 ntpd[1892]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:27:19.619861 ntpd[1892]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:27:19.620124 ntpd[1892]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:27:19.620184 ntpd[1892]: Listen normally on 3 eth0 172.31.26.116:123 Jul 6 23:27:19.620247 ntpd[1892]: Listen normally on 4 lo [::1]:123 Jul 6 23:27:19.622350 ntpd[1892]: bind(21) AF_INET6 fe80::4fb:bfff:fe56:4ecd%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:27:19.622398 ntpd[1892]: unable to create socket on eth0 (5) for fe80::4fb:bfff:fe56:4ecd%2#123 Jul 6 23:27:19.622424 ntpd[1892]: failed to init interface for address fe80::4fb:bfff:fe56:4ecd%2 Jul 6 23:27:19.622479 ntpd[1892]: Listening on routing socket on fd #21 for interface updates 
Jul 6 23:27:19.631183 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:27:19.635570 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:27:19.641371 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:19.641434 ntpd[1892]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:19.741104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:27:19.748498 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 6 23:27:19.797426 systemd-networkd[1815]: eth0: Gained IPv6LL Jul 6 23:27:19.859395 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:27:19.885925 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:27:19.893895 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:27:19.907810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:19.921592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:27:19.960360 containerd[1930]: time="2025-07-06T23:27:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:27:19.960762 containerd[1930]: time="2025-07-06T23:27:19.960672732Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:27:20.085488 containerd[1930]: time="2025-07-06T23:27:20.085042137Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.952µs" Jul 6 23:27:20.092839 containerd[1930]: time="2025-07-06T23:27:20.092352921Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:27:20.093867 systemd-logind[1901]: New seat seat0. Jul 6 23:27:20.096993 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105063273Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105377901Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105410229Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105459981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105602949Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:20.105795 containerd[1930]: time="2025-07-06T23:27:20.105628017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.154917657Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.155048157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.155112633Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.155138925Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.155539833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.156207021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157327 containerd[1930]: time="2025-07-06T23:27:20.156348729Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:20.157726 containerd[1930]: time="2025-07-06T23:27:20.156375561Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:27:20.165525 containerd[1930]: time="2025-07-06T23:27:20.165457689Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:27:20.166155 containerd[1930]: time="2025-07-06T23:27:20.166107597Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:27:20.168402 containerd[1930]: time="2025-07-06T23:27:20.168329985Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182236941Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182378397Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182417373Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:27:20.186193 containerd[1930]: 
time="2025-07-06T23:27:20.182460693Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182515797Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182556777Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182588913Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182629593Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182668497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:27:20.186193 containerd[1930]: time="2025-07-06T23:27:20.182704521Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:27:20.185941 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 6 23:27:20.195354 containerd[1930]: time="2025-07-06T23:27:20.193048677Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:27:20.195354 containerd[1930]: time="2025-07-06T23:27:20.193124829Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:27:20.197484 containerd[1930]: time="2025-07-06T23:27:20.197414133Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.210330970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211419346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211453462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211506718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211536814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211593802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211624330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211681474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.211712086Z" level=info msg="loading 
plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:27:20.213592 containerd[1930]: time="2025-07-06T23:27:20.213318286Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:27:20.218527 containerd[1930]: time="2025-07-06T23:27:20.218360482Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:27:20.226790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:27:20.232534 containerd[1930]: time="2025-07-06T23:27:20.227440438Z" level=info msg="Start snapshots syncer" Jul 6 23:27:20.236535 containerd[1930]: time="2025-07-06T23:27:20.236356366Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:27:20.241785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 6 23:27:20.250465 containerd[1930]: time="2025-07-06T23:27:20.239713054Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:27:20.250465 containerd[1930]: time="2025-07-06T23:27:20.246452950Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.246713758Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247603546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247662142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247701958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247729606Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247761634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247809910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247839598Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247894918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247924630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.247952578Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.248030074Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.248072926Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:27:20.250740 containerd[1930]: time="2025-07-06T23:27:20.248095642Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248121958Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248143318Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248182570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248210014Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248400538Z" level=info msg="runtime interface created" Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248419882Z" level=info msg="created NRI interface" Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248446930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248477338Z" level=info msg="Connect containerd service" Jul 6 23:27:20.251508 containerd[1930]: time="2025-07-06T23:27:20.248543758Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:27:20.268738 containerd[1930]: time="2025-07-06T23:27:20.267479518Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:27:20.288512 amazon-ssm-agent[2048]: Initializing new seelog logger Jul 6 23:27:20.288917 amazon-ssm-agent[2048]: New Seelog Logger Creation Complete Jul 6 23:27:20.288917 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.288917 amazon-ssm-agent[2048]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 processing appconfig overrides Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 processing appconfig overrides Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.291312 amazon-ssm-agent[2048]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.292872 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 processing appconfig overrides Jul 6 23:27:20.294387 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2899 INFO Proxy environment variables: Jul 6 23:27:20.305612 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:20.305612 amazon-ssm-agent[2048]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 6 23:27:20.305612 amazon-ssm-agent[2048]: 2025/07/06 23:27:20 processing appconfig overrides Jul 6 23:27:20.346180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:27:20.396407 coreos-metadata[2045]: Jul 06 23:27:20.396 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:20.399826 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2905 INFO https_proxy: Jul 6 23:27:20.402785 coreos-metadata[2045]: Jul 06 23:27:20.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:27:20.403567 coreos-metadata[2045]: Jul 06 23:27:20.403 INFO Fetch successful Jul 6 23:27:20.403660 coreos-metadata[2045]: Jul 06 23:27:20.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:27:20.407537 coreos-metadata[2045]: Jul 06 23:27:20.407 INFO Fetch successful Jul 6 23:27:20.416418 unknown[2045]: wrote ssh authorized keys file for user: core Jul 6 23:27:20.435513 locksmithd[1973]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:27:20.452017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:20.503180 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2905 INFO http_proxy: Jul 6 23:27:20.508355 update-ssh-keys[2084]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:20.511210 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:27:20.522971 systemd[1]: Finished sshkeys.service. 
Jul 6 23:27:20.607377 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2905 INFO no_proxy: Jul 6 23:27:20.703451 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2907 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:27:20.711025 systemd-logind[1901]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 6 23:27:20.804214 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.2908 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:27:20.812863 systemd-logind[1901]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:27:20.842999 containerd[1930]: time="2025-07-06T23:27:20.842728789Z" level=info msg="Start subscribing containerd event" Jul 6 23:27:20.842999 containerd[1930]: time="2025-07-06T23:27:20.842824921Z" level=info msg="Start recovering state" Jul 6 23:27:20.842999 containerd[1930]: time="2025-07-06T23:27:20.842964241Z" level=info msg="Start event monitor" Jul 6 23:27:20.842999 containerd[1930]: time="2025-07-06T23:27:20.842991685Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:27:20.842999 containerd[1930]: time="2025-07-06T23:27:20.843008881Z" level=info msg="Start streaming server" Jul 6 23:27:20.843326 containerd[1930]: time="2025-07-06T23:27:20.843028669Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:27:20.843326 containerd[1930]: time="2025-07-06T23:27:20.843045157Z" level=info msg="runtime interface starting up..." Jul 6 23:27:20.843326 containerd[1930]: time="2025-07-06T23:27:20.843059449Z" level=info msg="starting plugins..." Jul 6 23:27:20.843326 containerd[1930]: time="2025-07-06T23:27:20.843086749Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:27:20.850970 containerd[1930]: time="2025-07-06T23:27:20.845316517Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:27:20.850970 containerd[1930]: time="2025-07-06T23:27:20.845454949Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 6 23:27:20.850970 containerd[1930]: time="2025-07-06T23:27:20.847517149Z" level=info msg="containerd successfully booted in 0.892245s" Jul 6 23:27:20.847654 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:27:20.903385 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5826 INFO Agent will take identity from EC2 Jul 6 23:27:20.994937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:21.006068 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5873 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 6 23:27:21.065005 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:27:21.077936 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:27:21.086300 dbus-daemon[1884]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1963 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:21.105561 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5874 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 6 23:27:21.106727 systemd[1]: Starting polkit.service - Authorization Manager... Jul 6 23:27:21.205576 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5874 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:27:21.306943 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5874 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 6 23:27:21.407510 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5874 INFO [Registrar] Starting registrar module Jul 6 23:27:21.509365 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5934 INFO [EC2Identity] Checking disk for registration info Jul 6 23:27:21.613743 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5935 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 6 23:27:21.716341 amazon-ssm-agent[2048]: 2025-07-06 23:27:20.5936 INFO [EC2Identity] Generating registration keypair Jul 6 23:27:21.937363 polkitd[2111]: Started polkitd version 126 Jul 6 23:27:21.996924 polkitd[2111]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:27:21.999586 polkitd[2111]: Loading rules from directory /run/polkit-1/rules.d Jul 6 23:27:21.999682 polkitd[2111]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:27:22.000503 polkitd[2111]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 6 23:27:22.002627 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:27:22.000569 polkitd[2111]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:27:22.000649 polkitd[2111]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:27:22.002230 polkitd[2111]: Finished loading, compiling and executing 2 rules Jul 6 23:27:22.016108 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:27:22.030310 polkitd[2111]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:27:22.112813 systemd-hostnamed[1963]: Hostname set to (transient) Jul 6 23:27:22.112982 systemd-resolved[1767]: System hostname changed to 'ip-172-31-26-116'. 
Jul 6 23:27:22.211364 tar[1915]: linux-arm64/LICENSE Jul 6 23:27:22.211364 tar[1915]: linux-arm64/README.md Jul 6 23:27:22.256411 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:27:22.355617 sshd_keygen[1941]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:27:22.361428 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.3608 INFO [EC2Identity] Checking write access before registering Jul 6 23:27:22.409251 amazon-ssm-agent[2048]: 2025/07/06 23:27:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:22.409251 amazon-ssm-agent[2048]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:22.409251 amazon-ssm-agent[2048]: 2025/07/06 23:27:22 processing appconfig overrides Jul 6 23:27:22.421972 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:27:22.428346 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:27:22.436778 systemd[1]: Started sshd@0-172.31.26.116:22-139.178.89.65:41694.service - OpenSSH per-connection server daemon (139.178.89.65:41694). Jul 6 23:27:22.462357 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.3627 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 6 23:27:22.465706 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4086 INFO [EC2Identity] EC2 registration was successful. Jul 6 23:27:22.466828 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4087 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jul 6 23:27:22.469315 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4088 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:27:22.469315 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4088 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:27:22.469315 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4651 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:27:22.469315 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4655 INFO [CredentialRefresher] Credentials ready Jul 6 23:27:22.477035 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:27:22.478743 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:27:22.487507 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:27:22.496911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:22.517876 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:22.535186 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:27:22.542770 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:27:22.548776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:27:22.551565 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:27:22.554576 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:27:22.561377 systemd[1]: Startup finished in 3.865s (kernel) + 8.981s (initrd) + 8.957s (userspace) = 21.804s. 
Jul 6 23:27:22.562218 amazon-ssm-agent[2048]: 2025-07-06 23:27:22.4696 INFO [CredentialRefresher] Next credential rotation will be in 29.9999258897 minutes Jul 6 23:27:22.601397 ntpd[1892]: Listen normally on 6 eth0 [fe80::4fb:bfff:fe56:4ecd%2]:123 Jul 6 23:27:22.604061 ntpd[1892]: 6 Jul 23:27:22 ntpd[1892]: Listen normally on 6 eth0 [fe80::4fb:bfff:fe56:4ecd%2]:123 Jul 6 23:27:22.792458 sshd[2211]: Accepted publickey for core from 139.178.89.65 port 41694 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:22.795969 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:22.809226 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:27:22.811213 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:27:22.833994 systemd-logind[1901]: New session 1 of user core. Jul 6 23:27:22.856768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:27:22.862918 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:27:22.891839 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:27:22.898606 systemd-logind[1901]: New session c1 of user core. Jul 6 23:27:23.196627 systemd[2236]: Queued start job for default target default.target. Jul 6 23:27:23.204081 systemd[2236]: Created slice app.slice - User Application Slice. Jul 6 23:27:23.204376 systemd[2236]: Reached target paths.target - Paths. Jul 6 23:27:23.204563 systemd[2236]: Reached target timers.target - Timers. Jul 6 23:27:23.207029 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:27:23.248324 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:27:23.248592 systemd[2236]: Reached target sockets.target - Sockets. Jul 6 23:27:23.248678 systemd[2236]: Reached target basic.target - Basic System. 
Jul 6 23:27:23.248758 systemd[2236]: Reached target default.target - Main User Target. Jul 6 23:27:23.248818 systemd[2236]: Startup finished in 336ms. Jul 6 23:27:23.249059 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:27:23.256675 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:27:23.417931 systemd[1]: Started sshd@1-172.31.26.116:22-139.178.89.65:41698.service - OpenSSH per-connection server daemon (139.178.89.65:41698). Jul 6 23:27:23.510466 amazon-ssm-agent[2048]: 2025-07-06 23:27:23.5102 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:27:23.594299 kubelet[2221]: E0706 23:27:23.593873 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:23.598421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:23.598770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:23.603401 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 257.1M memory peak. Jul 6 23:27:23.611026 amazon-ssm-agent[2048]: 2025-07-06 23:27:23.5213 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2251) started Jul 6 23:27:23.639358 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 41698 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:23.642508 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:23.654363 systemd-logind[1901]: New session 2 of user core. Jul 6 23:27:23.660582 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:27:23.711166 amazon-ssm-agent[2048]: 2025-07-06 23:27:23.5213 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:27:23.792302 sshd[2258]: Connection closed by 139.178.89.65 port 41698 Jul 6 23:27:23.793746 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:23.801486 systemd-logind[1901]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:27:23.802833 systemd[1]: sshd@1-172.31.26.116:22-139.178.89.65:41698.service: Deactivated successfully. Jul 6 23:27:23.806187 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:27:23.810717 systemd-logind[1901]: Removed session 2. Jul 6 23:27:23.831939 systemd[1]: Started sshd@2-172.31.26.116:22-139.178.89.65:41714.service - OpenSSH per-connection server daemon (139.178.89.65:41714). Jul 6 23:27:24.040519 sshd[2269]: Accepted publickey for core from 139.178.89.65 port 41714 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:24.042970 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:24.052367 systemd-logind[1901]: New session 3 of user core. Jul 6 23:27:24.055565 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:27:24.176963 sshd[2271]: Connection closed by 139.178.89.65 port 41714 Jul 6 23:27:24.177552 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:24.184381 systemd[1]: sshd@2-172.31.26.116:22-139.178.89.65:41714.service: Deactivated successfully. Jul 6 23:27:24.187552 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:27:24.191804 systemd-logind[1901]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:27:24.194721 systemd-logind[1901]: Removed session 3. Jul 6 23:27:24.211617 systemd[1]: Started sshd@3-172.31.26.116:22-139.178.89.65:41728.service - OpenSSH per-connection server daemon (139.178.89.65:41728). 
Jul 6 23:27:24.421963 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 41728 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:24.424470 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:24.432330 systemd-logind[1901]: New session 4 of user core. Jul 6 23:27:24.441523 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:27:24.567757 sshd[2279]: Connection closed by 139.178.89.65 port 41728 Jul 6 23:27:24.568609 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:24.575220 systemd-logind[1901]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:27:24.576092 systemd[1]: sshd@3-172.31.26.116:22-139.178.89.65:41728.service: Deactivated successfully. Jul 6 23:27:24.579726 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:27:24.583192 systemd-logind[1901]: Removed session 4. Jul 6 23:27:24.606045 systemd[1]: Started sshd@4-172.31.26.116:22-139.178.89.65:41732.service - OpenSSH per-connection server daemon (139.178.89.65:41732). Jul 6 23:27:24.811010 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 41732 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:24.813158 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:24.820927 systemd-logind[1901]: New session 5 of user core. Jul 6 23:27:24.831560 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:27:24.958543 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:27:24.959147 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:24.977759 sudo[2288]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:25.002173 sshd[2287]: Connection closed by 139.178.89.65 port 41732 Jul 6 23:27:25.001982 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:25.007883 systemd[1]: sshd@4-172.31.26.116:22-139.178.89.65:41732.service: Deactivated successfully. Jul 6 23:27:25.010699 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:27:25.013914 systemd-logind[1901]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:27:25.017414 systemd-logind[1901]: Removed session 5. Jul 6 23:27:25.040375 systemd[1]: Started sshd@5-172.31.26.116:22-139.178.89.65:41736.service - OpenSSH per-connection server daemon (139.178.89.65:41736). Jul 6 23:27:25.245506 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 41736 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:25.248235 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:25.257366 systemd-logind[1901]: New session 6 of user core. Jul 6 23:27:25.265509 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 6 23:27:25.370725 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:27:25.371407 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:25.386131 sudo[2298]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:25.398602 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:27:25.399196 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:25.417089 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:27:25.476114 augenrules[2320]: No rules Jul 6 23:27:25.478420 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:25.478867 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:25.481014 sudo[2297]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:25.505011 sshd[2296]: Connection closed by 139.178.89.65 port 41736 Jul 6 23:27:25.504817 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:25.512482 systemd[1]: sshd@5-172.31.26.116:22-139.178.89.65:41736.service: Deactivated successfully. Jul 6 23:27:25.516229 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:27:25.519110 systemd-logind[1901]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:27:25.521431 systemd-logind[1901]: Removed session 6. Jul 6 23:27:25.549614 systemd[1]: Started sshd@6-172.31.26.116:22-139.178.89.65:41740.service - OpenSSH per-connection server daemon (139.178.89.65:41740). 
Jul 6 23:27:25.770244 sshd[2329]: Accepted publickey for core from 139.178.89.65 port 41740 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:25.773306 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:25.780713 systemd-logind[1901]: New session 7 of user core. Jul 6 23:27:25.790572 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:27:25.893473 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:27:25.894074 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:26.825119 systemd-resolved[1767]: Clock change detected. Flushing caches. Jul 6 23:27:27.261272 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:27:27.277569 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:27:27.843201 dockerd[2352]: time="2025-07-06T23:27:27.843105729Z" level=info msg="Starting up" Jul 6 23:27:27.848096 dockerd[2352]: time="2025-07-06T23:27:27.847477917Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:27:27.891828 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1257965137-merged.mount: Deactivated successfully. Jul 6 23:27:27.924571 systemd[1]: var-lib-docker-metacopy\x2dcheck2271838808-merged.mount: Deactivated successfully. Jul 6 23:27:27.941659 dockerd[2352]: time="2025-07-06T23:27:27.941470137Z" level=info msg="Loading containers: start." Jul 6 23:27:27.965102 kernel: Initializing XFRM netlink socket Jul 6 23:27:28.330868 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:27:28.407726 systemd-networkd[1815]: docker0: Link UP Jul 6 23:27:28.413080 dockerd[2352]: time="2025-07-06T23:27:28.412483327Z" level=info msg="Loading containers: done." Jul 6 23:27:28.441681 dockerd[2352]: time="2025-07-06T23:27:28.441630272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:27:28.441991 dockerd[2352]: time="2025-07-06T23:27:28.441961064Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:27:28.442337 dockerd[2352]: time="2025-07-06T23:27:28.442308392Z" level=info msg="Initializing buildkit" Jul 6 23:27:28.479375 dockerd[2352]: time="2025-07-06T23:27:28.479322860Z" level=info msg="Completed buildkit initialization" Jul 6 23:27:28.496465 dockerd[2352]: time="2025-07-06T23:27:28.496401476Z" level=info msg="Daemon has completed initialization" Jul 6 23:27:28.497454 dockerd[2352]: time="2025-07-06T23:27:28.496637480Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:27:28.496911 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:27:28.887155 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1479911387-merged.mount: Deactivated successfully. Jul 6 23:27:29.591984 containerd[1930]: time="2025-07-06T23:27:29.591930753Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:27:30.229267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448233284.mount: Deactivated successfully. 
Jul 6 23:27:31.546293 containerd[1930]: time="2025-07-06T23:27:31.546212255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:31.549314 containerd[1930]: time="2025-07-06T23:27:31.549241571Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 6 23:27:31.551814 containerd[1930]: time="2025-07-06T23:27:31.551737367Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:31.556962 containerd[1930]: time="2025-07-06T23:27:31.556885079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:31.559782 containerd[1930]: time="2025-07-06T23:27:31.559486439Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.967496646s" Jul 6 23:27:31.559782 containerd[1930]: time="2025-07-06T23:27:31.559549259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 6 23:27:31.562186 containerd[1930]: time="2025-07-06T23:27:31.562141523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:27:32.946089 containerd[1930]: time="2025-07-06T23:27:32.945749894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:32.948745 containerd[1930]: time="2025-07-06T23:27:32.948685010Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 6 23:27:32.950182 containerd[1930]: time="2025-07-06T23:27:32.950128466Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:32.955176 containerd[1930]: time="2025-07-06T23:27:32.955100582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:32.961672 containerd[1930]: time="2025-07-06T23:27:32.961446278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.399052623s" Jul 6 23:27:32.961672 containerd[1930]: time="2025-07-06T23:27:32.961507190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 6 23:27:32.962536 containerd[1930]: time="2025-07-06T23:27:32.962410466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:27:34.059452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:27:34.064085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:27:34.224983 containerd[1930]: time="2025-07-06T23:27:34.224922108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:34.228259 containerd[1930]: time="2025-07-06T23:27:34.228196992Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 6 23:27:34.231127 containerd[1930]: time="2025-07-06T23:27:34.230972268Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:34.241311 containerd[1930]: time="2025-07-06T23:27:34.241234836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:34.242695 containerd[1930]: time="2025-07-06T23:27:34.242507064Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.279729842s" Jul 6 23:27:34.242695 containerd[1930]: time="2025-07-06T23:27:34.242564616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 6 23:27:34.243675 containerd[1930]: time="2025-07-06T23:27:34.243381984Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:27:34.430584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:27:34.444774 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:34.529276 kubelet[2626]: E0706 23:27:34.529197 2626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:34.536596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:34.536917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:34.537765 systemd[1]: kubelet.service: Consumed 323ms CPU time, 104.7M memory peak. Jul 6 23:27:35.551656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215598000.mount: Deactivated successfully. Jul 6 23:27:36.035625 containerd[1930]: time="2025-07-06T23:27:36.035554225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.036708 containerd[1930]: time="2025-07-06T23:27:36.036653665Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 6 23:27:36.038323 containerd[1930]: time="2025-07-06T23:27:36.038265049Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.040994 containerd[1930]: time="2025-07-06T23:27:36.040947361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.042418 containerd[1930]: time="2025-07-06T23:27:36.042190309Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.798757049s" Jul 6 23:27:36.042418 containerd[1930]: time="2025-07-06T23:27:36.042245593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 6 23:27:36.042937 containerd[1930]: time="2025-07-06T23:27:36.042891217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:27:36.560703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533707945.mount: Deactivated successfully. Jul 6 23:27:37.721798 containerd[1930]: time="2025-07-06T23:27:37.721742226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:37.725593 containerd[1930]: time="2025-07-06T23:27:37.725549934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 6 23:27:37.727649 containerd[1930]: time="2025-07-06T23:27:37.727609698Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:37.735069 containerd[1930]: time="2025-07-06T23:27:37.734426142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:37.737349 containerd[1930]: time="2025-07-06T23:27:37.737305782Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.694337393s" Jul 6 23:27:37.737510 containerd[1930]: time="2025-07-06T23:27:37.737481942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:27:37.738350 containerd[1930]: time="2025-07-06T23:27:37.738311106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:27:38.210182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038724723.mount: Deactivated successfully. Jul 6 23:27:38.222798 containerd[1930]: time="2025-07-06T23:27:38.222718636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:38.224625 containerd[1930]: time="2025-07-06T23:27:38.224556604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:27:38.227109 containerd[1930]: time="2025-07-06T23:27:38.227009764Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:38.231667 containerd[1930]: time="2025-07-06T23:27:38.231565168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:38.233036 containerd[1930]: time="2025-07-06T23:27:38.232839688Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 494.151062ms" Jul 6 23:27:38.233036 containerd[1930]: time="2025-07-06T23:27:38.232892680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:27:38.234143 containerd[1930]: time="2025-07-06T23:27:38.234032140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:27:38.817214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598976279.mount: Deactivated successfully. Jul 6 23:27:40.816092 containerd[1930]: time="2025-07-06T23:27:40.816001257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.818646 containerd[1930]: time="2025-07-06T23:27:40.818581569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 6 23:27:40.820531 containerd[1930]: time="2025-07-06T23:27:40.820472625Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.826718 containerd[1930]: time="2025-07-06T23:27:40.826618209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.828840 containerd[1930]: time="2025-07-06T23:27:40.828637797Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.594332249s" Jul 6 23:27:40.828840 containerd[1930]: time="2025-07-06T23:27:40.828694125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 6 23:27:44.559536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:27:44.567371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:45.015273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:45.029587 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:45.112086 kubelet[2781]: E0706 23:27:45.111982 2781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:45.117238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:45.117706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:45.118896 systemd[1]: kubelet.service: Consumed 297ms CPU time, 107.2M memory peak. Jul 6 23:27:48.049467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:48.049839 systemd[1]: kubelet.service: Consumed 297ms CPU time, 107.2M memory peak. Jul 6 23:27:48.053699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:48.111661 systemd[1]: Reload requested from client PID 2795 ('systemctl') (unit session-7.scope)... Jul 6 23:27:48.111687 systemd[1]: Reloading... 
Jul 6 23:27:48.361112 zram_generator::config[2843]: No configuration found. Jul 6 23:27:48.549921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:48.809281 systemd[1]: Reloading finished in 696 ms. Jul 6 23:27:48.907228 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:27:48.907448 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:27:48.908026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:48.908157 systemd[1]: kubelet.service: Consumed 226ms CPU time, 95M memory peak. Jul 6 23:27:48.911788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:49.248592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:49.264883 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:27:49.343337 kubelet[2904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:27:49.343806 kubelet[2904]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:27:49.344074 kubelet[2904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:27:49.344177 kubelet[2904]: I0706 23:27:49.344007 2904 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:27:50.655815 kubelet[2904]: I0706 23:27:50.655762 2904 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:27:50.657130 kubelet[2904]: I0706 23:27:50.656485 2904 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:27:50.657130 kubelet[2904]: I0706 23:27:50.656916 2904 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:27:50.700750 kubelet[2904]: I0706 23:27:50.700688 2904 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:27:50.703466 kubelet[2904]: E0706 23:27:50.703399 2904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:50.720652 kubelet[2904]: I0706 23:27:50.720466 2904 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:27:50.727813 kubelet[2904]: I0706 23:27:50.727777 2904 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:27:50.728703 kubelet[2904]: I0706 23:27:50.728681 2904 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:27:50.729124 kubelet[2904]: I0706 23:27:50.729086 2904 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:27:50.729507 kubelet[2904]: I0706 23:27:50.729217 2904 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Jul 6 23:27:50.729858 kubelet[2904]: I0706 23:27:50.729837 2904 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:27:50.729984 kubelet[2904]: I0706 23:27:50.729965 2904 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:27:50.730395 kubelet[2904]: I0706 23:27:50.730375 2904 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:50.735164 kubelet[2904]: I0706 23:27:50.735130 2904 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:27:50.735335 kubelet[2904]: I0706 23:27:50.735315 2904 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:27:50.735448 kubelet[2904]: I0706 23:27:50.735431 2904 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:27:50.735696 kubelet[2904]: I0706 23:27:50.735676 2904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:27:50.739397 kubelet[2904]: W0706 23:27:50.739170 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-116&limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:50.739550 kubelet[2904]: E0706 23:27:50.739414 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-116&limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:50.744590 kubelet[2904]: I0706 23:27:50.743248 2904 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:27:50.744590 kubelet[2904]: I0706 23:27:50.744402 2904 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Jul 6 23:27:50.744783 kubelet[2904]: W0706 23:27:50.744737 2904 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:27:50.747083 kubelet[2904]: I0706 23:27:50.746591 2904 server.go:1274] "Started kubelet" Jul 6 23:27:50.747083 kubelet[2904]: W0706 23:27:50.746805 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:50.747083 kubelet[2904]: E0706 23:27:50.746881 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:50.752409 kubelet[2904]: I0706 23:27:50.752323 2904 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:27:50.753169 kubelet[2904]: I0706 23:27:50.753077 2904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:27:50.753657 kubelet[2904]: I0706 23:27:50.753607 2904 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:27:50.754833 kubelet[2904]: I0706 23:27:50.754802 2904 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:27:50.756339 kubelet[2904]: E0706 23:27:50.754029 2904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.116:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.116:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-116.184fcd43b4949a42 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-116,UID:ip-172-31-26-116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-116,},FirstTimestamp:2025-07-06 23:27:50.746552898 +0000 UTC m=+1.475440748,LastTimestamp:2025-07-06 23:27:50.746552898 +0000 UTC m=+1.475440748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-116,}" Jul 6 23:27:50.762781 kubelet[2904]: E0706 23:27:50.762678 2904 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:27:50.763472 kubelet[2904]: I0706 23:27:50.763336 2904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:27:50.763573 kubelet[2904]: I0706 23:27:50.763493 2904 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:27:50.771102 kubelet[2904]: E0706 23:27:50.770256 2904 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-116\" not found" Jul 6 23:27:50.771102 kubelet[2904]: I0706 23:27:50.770325 2904 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:27:50.771102 kubelet[2904]: I0706 23:27:50.770693 2904 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:27:50.771102 kubelet[2904]: I0706 23:27:50.770788 2904 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:27:50.771756 kubelet[2904]: W0706 23:27:50.771646 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:50.771852 
kubelet[2904]: E0706 23:27:50.771801 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:50.772795 kubelet[2904]: E0706 23:27:50.772713 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": dial tcp 172.31.26.116:6443: connect: connection refused" interval="200ms" Jul 6 23:27:50.773116 kubelet[2904]: I0706 23:27:50.773074 2904 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:27:50.773249 kubelet[2904]: I0706 23:27:50.773209 2904 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:27:50.775866 kubelet[2904]: I0706 23:27:50.775817 2904 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:27:50.810711 kubelet[2904]: I0706 23:27:50.810455 2904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:27:50.813304 kubelet[2904]: I0706 23:27:50.813264 2904 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:27:50.813494 kubelet[2904]: I0706 23:27:50.813474 2904 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:27:50.813610 kubelet[2904]: I0706 23:27:50.813592 2904 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:27:50.813774 kubelet[2904]: E0706 23:27:50.813742 2904 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:27:50.820589 kubelet[2904]: I0706 23:27:50.820532 2904 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:27:50.820589 kubelet[2904]: I0706 23:27:50.820570 2904 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:27:50.820777 kubelet[2904]: I0706 23:27:50.820602 2904 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:50.822139 kubelet[2904]: W0706 23:27:50.821963 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:50.825870 kubelet[2904]: E0706 23:27:50.822032 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:50.826952 kubelet[2904]: I0706 23:27:50.826905 2904 policy_none.go:49] "None policy: Start" Jul 6 23:27:50.828081 kubelet[2904]: I0706 23:27:50.828024 2904 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:27:50.828215 kubelet[2904]: I0706 23:27:50.828145 2904 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:27:50.842912 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jul 6 23:27:50.863022 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:27:50.871022 kubelet[2904]: E0706 23:27:50.870373 2904 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-116\" not found" Jul 6 23:27:50.870756 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:27:50.880832 kubelet[2904]: I0706 23:27:50.880692 2904 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:27:50.881263 kubelet[2904]: I0706 23:27:50.881007 2904 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:27:50.881263 kubelet[2904]: I0706 23:27:50.881058 2904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:27:50.884761 kubelet[2904]: I0706 23:27:50.882341 2904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:27:50.887635 kubelet[2904]: E0706 23:27:50.887583 2904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-116\" not found" Jul 6 23:27:50.934549 systemd[1]: Created slice kubepods-burstable-poddcc1e3f56046da4c0daba13bc9ea1dec.slice - libcontainer container kubepods-burstable-poddcc1e3f56046da4c0daba13bc9ea1dec.slice. Jul 6 23:27:50.963015 systemd[1]: Created slice kubepods-burstable-podae3ec351ae6d8859517262481ffd53cf.slice - libcontainer container kubepods-burstable-podae3ec351ae6d8859517262481ffd53cf.slice. 
Jul 6 23:27:50.974518 kubelet[2904]: E0706 23:27:50.974441 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": dial tcp 172.31.26.116:6443: connect: connection refused" interval="400ms" Jul 6 23:27:50.982360 systemd[1]: Created slice kubepods-burstable-pod9a6a1967590d07ecfb27aa6df4be3bd4.slice - libcontainer container kubepods-burstable-pod9a6a1967590d07ecfb27aa6df4be3bd4.slice. Jul 6 23:27:50.985267 kubelet[2904]: I0706 23:27:50.985180 2904 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-116" Jul 6 23:27:50.986083 kubelet[2904]: E0706 23:27:50.985974 2904 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.116:6443/api/v1/nodes\": dial tcp 172.31.26.116:6443: connect: connection refused" node="ip-172-31-26-116" Jul 6 23:27:51.071647 kubelet[2904]: I0706 23:27:51.071534 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:51.071647 kubelet[2904]: I0706 23:27:51.071600 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:51.071647 kubelet[2904]: I0706 23:27:51.071646 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-ca-certs\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:51.071900 kubelet[2904]: I0706 23:27:51.071686 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:51.071900 kubelet[2904]: I0706 23:27:51.071731 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:51.071900 kubelet[2904]: I0706 23:27:51.071769 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:51.071900 kubelet[2904]: I0706 23:27:51.071806 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:51.071900 kubelet[2904]: I0706 23:27:51.071841 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:51.072187 kubelet[2904]: I0706 23:27:51.071876 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a6a1967590d07ecfb27aa6df4be3bd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-116\" (UID: \"9a6a1967590d07ecfb27aa6df4be3bd4\") " pod="kube-system/kube-scheduler-ip-172-31-26-116" Jul 6 23:27:51.188998 kubelet[2904]: I0706 23:27:51.188848 2904 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-116" Jul 6 23:27:51.189674 kubelet[2904]: E0706 23:27:51.189464 2904 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.116:6443/api/v1/nodes\": dial tcp 172.31.26.116:6443: connect: connection refused" node="ip-172-31-26-116" Jul 6 23:27:51.258646 containerd[1930]: time="2025-07-06T23:27:51.258298709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-116,Uid:dcc1e3f56046da4c0daba13bc9ea1dec,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:51.276604 containerd[1930]: time="2025-07-06T23:27:51.276528593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-116,Uid:ae3ec351ae6d8859517262481ffd53cf,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:51.290434 containerd[1930]: time="2025-07-06T23:27:51.290187521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-116,Uid:9a6a1967590d07ecfb27aa6df4be3bd4,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:51.337902 containerd[1930]: time="2025-07-06T23:27:51.337820597Z" level=info msg="connecting to shim 57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c" 
address="unix:///run/containerd/s/b0f06677d952a93c5de1a52933f855e7edc13825869dc56029a471cd8717c33c" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:51.376807 kubelet[2904]: E0706 23:27:51.376722 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": dial tcp 172.31.26.116:6443: connect: connection refused" interval="800ms" Jul 6 23:27:51.405559 systemd[1]: Started cri-containerd-57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c.scope - libcontainer container 57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c. Jul 6 23:27:51.411397 containerd[1930]: time="2025-07-06T23:27:51.411246738Z" level=info msg="connecting to shim bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0" address="unix:///run/containerd/s/ede3dda257fe7f99f08cb2ce952af5c46ade3a2e2f594a846542d38e959ed7d8" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:51.418242 containerd[1930]: time="2025-07-06T23:27:51.418172370Z" level=info msg="connecting to shim 8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659" address="unix:///run/containerd/s/935034e7bbdbaec568cd18462eee97f8bb618c39c91a2b02a00119b323d94949" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:51.490674 systemd[1]: Started cri-containerd-bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0.scope - libcontainer container bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0. Jul 6 23:27:51.518740 systemd[1]: Started cri-containerd-8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659.scope - libcontainer container 8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659. 
Jul 6 23:27:51.556362 containerd[1930]: time="2025-07-06T23:27:51.556302678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-116,Uid:dcc1e3f56046da4c0daba13bc9ea1dec,Namespace:kube-system,Attempt:0,} returns sandbox id \"57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c\"" Jul 6 23:27:51.567679 containerd[1930]: time="2025-07-06T23:27:51.567617574Z" level=info msg="CreateContainer within sandbox \"57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:27:51.593698 kubelet[2904]: I0706 23:27:51.593383 2904 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-116" Jul 6 23:27:51.595419 kubelet[2904]: E0706 23:27:51.595291 2904 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.116:6443/api/v1/nodes\": dial tcp 172.31.26.116:6443: connect: connection refused" node="ip-172-31-26-116" Jul 6 23:27:51.598060 containerd[1930]: time="2025-07-06T23:27:51.597971251Z" level=info msg="Container 758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:51.610029 kubelet[2904]: W0706 23:27:51.609712 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:51.610029 kubelet[2904]: E0706 23:27:51.609831 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:51.622004 containerd[1930]: time="2025-07-06T23:27:51.621801907Z" 
level=info msg="CreateContainer within sandbox \"57317c3ac0931e679ebb4729c15fcc9b1fe6ed27515358533f500c1c2b465f6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc\"" Jul 6 23:27:51.623402 containerd[1930]: time="2025-07-06T23:27:51.623350651Z" level=info msg="StartContainer for \"758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc\"" Jul 6 23:27:51.632743 containerd[1930]: time="2025-07-06T23:27:51.632268787Z" level=info msg="connecting to shim 758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc" address="unix:///run/containerd/s/b0f06677d952a93c5de1a52933f855e7edc13825869dc56029a471cd8717c33c" protocol=ttrpc version=3 Jul 6 23:27:51.641350 containerd[1930]: time="2025-07-06T23:27:51.641261659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-116,Uid:ae3ec351ae6d8859517262481ffd53cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0\"" Jul 6 23:27:51.645741 containerd[1930]: time="2025-07-06T23:27:51.645690487Z" level=info msg="CreateContainer within sandbox \"bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:27:51.669732 containerd[1930]: time="2025-07-06T23:27:51.669677623Z" level=info msg="Container c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:51.671378 containerd[1930]: time="2025-07-06T23:27:51.671225215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-116,Uid:9a6a1967590d07ecfb27aa6df4be3bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659\"" Jul 6 23:27:51.673398 systemd[1]: Started 
cri-containerd-758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc.scope - libcontainer container 758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc. Jul 6 23:27:51.680063 containerd[1930]: time="2025-07-06T23:27:51.679985323Z" level=info msg="CreateContainer within sandbox \"8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:27:51.701495 containerd[1930]: time="2025-07-06T23:27:51.701425543Z" level=info msg="CreateContainer within sandbox \"bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\"" Jul 6 23:27:51.703833 containerd[1930]: time="2025-07-06T23:27:51.703464223Z" level=info msg="StartContainer for \"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\"" Jul 6 23:27:51.706341 kubelet[2904]: W0706 23:27:51.706224 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-116&limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:51.707323 kubelet[2904]: E0706 23:27:51.706377 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-116&limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:51.708270 containerd[1930]: time="2025-07-06T23:27:51.707863123Z" level=info msg="Container 44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:51.711466 containerd[1930]: time="2025-07-06T23:27:51.711396919Z" 
level=info msg="connecting to shim c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42" address="unix:///run/containerd/s/ede3dda257fe7f99f08cb2ce952af5c46ade3a2e2f594a846542d38e959ed7d8" protocol=ttrpc version=3 Jul 6 23:27:51.730289 containerd[1930]: time="2025-07-06T23:27:51.729716827Z" level=info msg="CreateContainer within sandbox \"8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\"" Jul 6 23:27:51.732152 containerd[1930]: time="2025-07-06T23:27:51.731543731Z" level=info msg="StartContainer for \"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\"" Jul 6 23:27:51.735083 containerd[1930]: time="2025-07-06T23:27:51.734988715Z" level=info msg="connecting to shim 44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab" address="unix:///run/containerd/s/935034e7bbdbaec568cd18462eee97f8bb618c39c91a2b02a00119b323d94949" protocol=ttrpc version=3 Jul 6 23:27:51.791730 systemd[1]: Started cri-containerd-44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab.scope - libcontainer container 44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab. Jul 6 23:27:51.800783 systemd[1]: Started cri-containerd-c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42.scope - libcontainer container c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42. 
Jul 6 23:27:51.818506 containerd[1930]: time="2025-07-06T23:27:51.818383664Z" level=info msg="StartContainer for \"758b69a588dee69ac4350dfc63cadbf8e3a3c2a872742fe83f610212d6734bfc\" returns successfully" Jul 6 23:27:52.001967 containerd[1930]: time="2025-07-06T23:27:52.001878401Z" level=info msg="StartContainer for \"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\" returns successfully" Jul 6 23:27:52.008108 containerd[1930]: time="2025-07-06T23:27:52.007625609Z" level=info msg="StartContainer for \"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\" returns successfully" Jul 6 23:27:52.028102 kubelet[2904]: W0706 23:27:52.027767 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:52.029219 kubelet[2904]: E0706 23:27:52.028294 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:27:52.045401 kubelet[2904]: W0706 23:27:52.045164 2904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.116:6443: connect: connection refused Jul 6 23:27:52.045401 kubelet[2904]: E0706 23:27:52.045263 2904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.116:6443: 
connect: connection refused" logger="UnhandledError" Jul 6 23:27:52.371966 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:27:52.399253 kubelet[2904]: I0706 23:27:52.399199 2904 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-116" Jul 6 23:27:55.420440 kubelet[2904]: E0706 23:27:55.420378 2904 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-116\" not found" node="ip-172-31-26-116" Jul 6 23:27:55.438595 kubelet[2904]: I0706 23:27:55.438538 2904 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-116" Jul 6 23:27:55.438595 kubelet[2904]: E0706 23:27:55.438593 2904 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-116\": node \"ip-172-31-26-116\" not found" Jul 6 23:27:55.743030 kubelet[2904]: I0706 23:27:55.742878 2904 apiserver.go:52] "Watching apiserver" Jul 6 23:27:55.771320 kubelet[2904]: I0706 23:27:55.771260 2904 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:27:57.749717 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-7.scope)... Jul 6 23:27:57.749750 systemd[1]: Reloading... Jul 6 23:27:58.010118 zram_generator::config[3224]: No configuration found. Jul 6 23:27:58.234136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:58.524501 systemd[1]: Reloading finished in 774 ms. Jul 6 23:27:58.597733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:58.612671 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:27:58.613182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:27:58.613277 systemd[1]: kubelet.service: Consumed 2.215s CPU time, 128.8M memory peak. Jul 6 23:27:58.617674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:59.021862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:59.039628 (kubelet)[3280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:27:59.145931 kubelet[3280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:27:59.145931 kubelet[3280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:27:59.145931 kubelet[3280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:27:59.145931 kubelet[3280]: I0706 23:27:59.145709 3280 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:27:59.157684 kubelet[3280]: I0706 23:27:59.157620 3280 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:27:59.157684 kubelet[3280]: I0706 23:27:59.157672 3280 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:27:59.158259 kubelet[3280]: I0706 23:27:59.158167 3280 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:27:59.162196 kubelet[3280]: I0706 23:27:59.162130 3280 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 6 23:27:59.171017 kubelet[3280]: I0706 23:27:59.170892 3280 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:27:59.187080 kubelet[3280]: I0706 23:27:59.187016 3280 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:27:59.194568 kubelet[3280]: I0706 23:27:59.194380 3280 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:27:59.195336 kubelet[3280]: I0706 23:27:59.194789 3280 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:27:59.195436 kubelet[3280]: I0706 23:27:59.195285 3280 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:27:59.195810 kubelet[3280]: I0706 23:27:59.195366 3280 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"image
fs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:27:59.195810 kubelet[3280]: I0706 23:27:59.195687 3280 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:27:59.195810 kubelet[3280]: I0706 23:27:59.195710 3280 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:27:59.195810 kubelet[3280]: I0706 23:27:59.195772 3280 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:59.196188 kubelet[3280]: I0706 23:27:59.196149 3280 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:27:59.196254 kubelet[3280]: I0706 23:27:59.196188 3280 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:27:59.196254 kubelet[3280]: I0706 23:27:59.196228 3280 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:27:59.196358 kubelet[3280]: I0706 23:27:59.196257 3280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:27:59.200092 kubelet[3280]: I0706 23:27:59.199979 3280 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:27:59.202899 kubelet[3280]: I0706 23:27:59.202417 3280 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:27:59.204932 kubelet[3280]: I0706 23:27:59.204715 3280 server.go:1274] "Started kubelet" Jul 6 23:27:59.210946 kubelet[3280]: I0706 23:27:59.210881 3280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:27:59.222637 
kubelet[3280]: I0706 23:27:59.222483 3280 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:27:59.225360 kubelet[3280]: I0706 23:27:59.225299 3280 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:27:59.231644 kubelet[3280]: I0706 23:27:59.229755 3280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:27:59.237230 kubelet[3280]: I0706 23:27:59.233588 3280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:27:59.246935 kubelet[3280]: I0706 23:27:59.246837 3280 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:27:59.260093 kubelet[3280]: I0706 23:27:59.258024 3280 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:27:59.261966 kubelet[3280]: E0706 23:27:59.260721 3280 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-116\" not found" Jul 6 23:27:59.284192 kubelet[3280]: I0706 23:27:59.284072 3280 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:27:59.285612 kubelet[3280]: I0706 23:27:59.284540 3280 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:27:59.298320 kubelet[3280]: I0706 23:27:59.297692 3280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:27:59.305089 kubelet[3280]: I0706 23:27:59.304610 3280 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:27:59.305089 kubelet[3280]: I0706 23:27:59.304656 3280 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:27:59.305089 kubelet[3280]: I0706 23:27:59.304687 3280 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:27:59.305089 kubelet[3280]: E0706 23:27:59.304756 3280 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:27:59.306412 kubelet[3280]: I0706 23:27:59.306293 3280 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:27:59.307071 kubelet[3280]: I0706 23:27:59.306839 3280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:27:59.314549 kubelet[3280]: I0706 23:27:59.314511 3280 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:27:59.328529 kubelet[3280]: E0706 23:27:59.328488 3280 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:27:59.405306 kubelet[3280]: E0706 23:27:59.404954 3280 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:27:59.446669 kubelet[3280]: I0706 23:27:59.446612 3280 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:27:59.446669 kubelet[3280]: I0706 23:27:59.446648 3280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:27:59.446859 kubelet[3280]: I0706 23:27:59.446685 3280 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:59.447021 kubelet[3280]: I0706 23:27:59.446944 3280 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:27:59.447118 kubelet[3280]: I0706 23:27:59.447009 3280 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:27:59.447118 kubelet[3280]: I0706 23:27:59.447066 3280 policy_none.go:49] "None policy: Start" Jul 6 23:27:59.448818 kubelet[3280]: I0706 23:27:59.448779 3280 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:27:59.448968 kubelet[3280]: I0706 23:27:59.448828 3280 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:27:59.450446 kubelet[3280]: I0706 23:27:59.450194 3280 state_mem.go:75] "Updated machine memory state" Jul 6 23:27:59.464217 kubelet[3280]: I0706 23:27:59.463950 3280 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:27:59.466614 kubelet[3280]: I0706 23:27:59.466563 3280 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:27:59.466754 kubelet[3280]: I0706 23:27:59.466600 3280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:27:59.472080 kubelet[3280]: I0706 23:27:59.472009 3280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:27:59.581898 kubelet[3280]: I0706 
23:27:59.581829 3280 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-116" Jul 6 23:27:59.598477 kubelet[3280]: I0706 23:27:59.598425 3280 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-26-116" Jul 6 23:27:59.598643 kubelet[3280]: I0706 23:27:59.598544 3280 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-116" Jul 6 23:27:59.688757 kubelet[3280]: I0706 23:27:59.688689 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:59.688757 kubelet[3280]: I0706 23:27:59.688774 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a6a1967590d07ecfb27aa6df4be3bd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-116\" (UID: \"9a6a1967590d07ecfb27aa6df4be3bd4\") " pod="kube-system/kube-scheduler-ip-172-31-26-116" Jul 6 23:27:59.690028 kubelet[3280]: I0706 23:27:59.688813 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-ca-certs\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:59.690028 kubelet[3280]: I0706 23:27:59.688852 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " 
pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:59.690028 kubelet[3280]: I0706 23:27:59.688896 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:59.690028 kubelet[3280]: I0706 23:27:59.688934 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:59.690028 kubelet[3280]: I0706 23:27:59.688971 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcc1e3f56046da4c0daba13bc9ea1dec-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-116\" (UID: \"dcc1e3f56046da4c0daba13bc9ea1dec\") " pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:27:59.690328 kubelet[3280]: I0706 23:27:59.689003 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:27:59.690328 kubelet[3280]: I0706 23:27:59.689062 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae3ec351ae6d8859517262481ffd53cf-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-26-116\" (UID: \"ae3ec351ae6d8859517262481ffd53cf\") " pod="kube-system/kube-controller-manager-ip-172-31-26-116" Jul 6 23:28:00.218510 kubelet[3280]: I0706 23:28:00.218439 3280 apiserver.go:52] "Watching apiserver" Jul 6 23:28:00.285976 kubelet[3280]: I0706 23:28:00.285918 3280 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:28:00.391184 kubelet[3280]: E0706 23:28:00.391119 3280 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-116\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-116" Jul 6 23:28:00.504239 kubelet[3280]: I0706 23:28:00.503405 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-116" podStartSLOduration=1.503382687 podStartE2EDuration="1.503382687s" podCreationTimestamp="2025-07-06 23:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:00.446976147 +0000 UTC m=+1.398244316" watchObservedRunningTime="2025-07-06 23:28:00.503382687 +0000 UTC m=+1.454650808" Jul 6 23:28:00.527263 kubelet[3280]: I0706 23:28:00.527174 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-116" podStartSLOduration=1.527125995 podStartE2EDuration="1.527125995s" podCreationTimestamp="2025-07-06 23:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:00.503653695 +0000 UTC m=+1.454921828" watchObservedRunningTime="2025-07-06 23:28:00.527125995 +0000 UTC m=+1.478394140" Jul 6 23:28:00.564719 kubelet[3280]: I0706 23:28:00.564379 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-116" podStartSLOduration=1.564226047 
podStartE2EDuration="1.564226047s" podCreationTimestamp="2025-07-06 23:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:00.538513527 +0000 UTC m=+1.489781660" watchObservedRunningTime="2025-07-06 23:28:00.564226047 +0000 UTC m=+1.515494180" Jul 6 23:28:03.415014 kubelet[3280]: I0706 23:28:03.414964 3280 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:28:03.416174 containerd[1930]: time="2025-07-06T23:28:03.416101301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:28:03.417375 kubelet[3280]: I0706 23:28:03.416991 3280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:28:04.212693 kubelet[3280]: I0706 23:28:04.212026 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/038673f7-6166-4b8d-9ea9-ba9f692da0ac-kube-proxy\") pod \"kube-proxy-s9cbw\" (UID: \"038673f7-6166-4b8d-9ea9-ba9f692da0ac\") " pod="kube-system/kube-proxy-s9cbw" Jul 6 23:28:04.212693 kubelet[3280]: I0706 23:28:04.212109 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038673f7-6166-4b8d-9ea9-ba9f692da0ac-xtables-lock\") pod \"kube-proxy-s9cbw\" (UID: \"038673f7-6166-4b8d-9ea9-ba9f692da0ac\") " pod="kube-system/kube-proxy-s9cbw" Jul 6 23:28:04.212693 kubelet[3280]: I0706 23:28:04.212150 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038673f7-6166-4b8d-9ea9-ba9f692da0ac-lib-modules\") pod \"kube-proxy-s9cbw\" (UID: \"038673f7-6166-4b8d-9ea9-ba9f692da0ac\") " pod="kube-system/kube-proxy-s9cbw" Jul 6 
23:28:04.212693 kubelet[3280]: I0706 23:28:04.212188 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblnv\" (UniqueName: \"kubernetes.io/projected/038673f7-6166-4b8d-9ea9-ba9f692da0ac-kube-api-access-vblnv\") pod \"kube-proxy-s9cbw\" (UID: \"038673f7-6166-4b8d-9ea9-ba9f692da0ac\") " pod="kube-system/kube-proxy-s9cbw" Jul 6 23:28:04.226783 systemd[1]: Created slice kubepods-besteffort-pod038673f7_6166_4b8d_9ea9_ba9f692da0ac.slice - libcontainer container kubepods-besteffort-pod038673f7_6166_4b8d_9ea9_ba9f692da0ac.slice. Jul 6 23:28:04.514445 kubelet[3280]: I0706 23:28:04.513978 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2abd31d8-a529-4c1f-9e76-1d1a948fac12-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-s2bs5\" (UID: \"2abd31d8-a529-4c1f-9e76-1d1a948fac12\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-s2bs5" Jul 6 23:28:04.517083 kubelet[3280]: I0706 23:28:04.516143 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4j67\" (UniqueName: \"kubernetes.io/projected/2abd31d8-a529-4c1f-9e76-1d1a948fac12-kube-api-access-z4j67\") pod \"tigera-operator-5bf8dfcb4-s2bs5\" (UID: \"2abd31d8-a529-4c1f-9e76-1d1a948fac12\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-s2bs5" Jul 6 23:28:04.522904 systemd[1]: Created slice kubepods-besteffort-pod2abd31d8_a529_4c1f_9e76_1d1a948fac12.slice - libcontainer container kubepods-besteffort-pod2abd31d8_a529_4c1f_9e76_1d1a948fac12.slice. 
Jul 6 23:28:04.542030 containerd[1930]: time="2025-07-06T23:28:04.541883635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9cbw,Uid:038673f7-6166-4b8d-9ea9-ba9f692da0ac,Namespace:kube-system,Attempt:0,}"
Jul 6 23:28:04.599143 containerd[1930]: time="2025-07-06T23:28:04.598221691Z" level=info msg="connecting to shim 11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3" address="unix:///run/containerd/s/33e5c8d55c3ba429d34f6d18d07c09a67e64cf204f1e3d1d607fed052553d0fa" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:04.661356 systemd[1]: Started cri-containerd-11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3.scope - libcontainer container 11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3.
Jul 6 23:28:04.714970 containerd[1930]: time="2025-07-06T23:28:04.714878192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9cbw,Uid:038673f7-6166-4b8d-9ea9-ba9f692da0ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3\""
Jul 6 23:28:04.723090 containerd[1930]: time="2025-07-06T23:28:04.722016440Z" level=info msg="CreateContainer within sandbox \"11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:28:04.747878 containerd[1930]: time="2025-07-06T23:28:04.747804452Z" level=info msg="Container c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:04.766248 containerd[1930]: time="2025-07-06T23:28:04.765673268Z" level=info msg="CreateContainer within sandbox \"11a72fc82d3dac0a0ff5dd24f1787aff1d31f2ca0d0cf7391a7b96c9df934db3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8\""
Jul 6 23:28:04.768802 containerd[1930]: time="2025-07-06T23:28:04.768734456Z" level=info msg="StartContainer for \"c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8\""
Jul 6 23:28:04.784064 containerd[1930]: time="2025-07-06T23:28:04.783956624Z" level=info msg="connecting to shim c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8" address="unix:///run/containerd/s/33e5c8d55c3ba429d34f6d18d07c09a67e64cf204f1e3d1d607fed052553d0fa" protocol=ttrpc version=3
Jul 6 23:28:04.826347 systemd[1]: Started cri-containerd-c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8.scope - libcontainer container c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8.
Jul 6 23:28:04.833133 containerd[1930]: time="2025-07-06T23:28:04.833019608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-s2bs5,Uid:2abd31d8-a529-4c1f-9e76-1d1a948fac12,Namespace:tigera-operator,Attempt:0,}"
Jul 6 23:28:04.875293 containerd[1930]: time="2025-07-06T23:28:04.875218269Z" level=info msg="connecting to shim 409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e" address="unix:///run/containerd/s/fbd6a8316a983261177e5678de529ea33736f5208acb58167e0588691e27fe75" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:04.935365 systemd[1]: Started cri-containerd-409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e.scope - libcontainer container 409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e.
Jul 6 23:28:04.936457 containerd[1930]: time="2025-07-06T23:28:04.936100305Z" level=info msg="StartContainer for \"c6044b73e4b15b3acc7e8c455275eb391b75f4c20a0ca6babc3d5c3db3180ee8\" returns successfully"
Jul 6 23:28:04.982181 update_engine[1902]: I20250706 23:28:04.980096 1902 update_attempter.cc:509] Updating boot flags...
Jul 6 23:28:05.154875 containerd[1930]: time="2025-07-06T23:28:05.154685982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-s2bs5,Uid:2abd31d8-a529-4c1f-9e76-1d1a948fac12,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e\""
Jul 6 23:28:05.160526 containerd[1930]: time="2025-07-06T23:28:05.160456854Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 6 23:28:06.611969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108416769.mount: Deactivated successfully.
Jul 6 23:28:07.373631 containerd[1930]: time="2025-07-06T23:28:07.373570653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:07.376352 containerd[1930]: time="2025-07-06T23:28:07.376302105Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 6 23:28:07.378865 containerd[1930]: time="2025-07-06T23:28:07.378772101Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:07.384272 containerd[1930]: time="2025-07-06T23:28:07.384191541Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:07.385062 containerd[1930]: time="2025-07-06T23:28:07.384950265Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.224427339s"
Jul 6 23:28:07.385062 containerd[1930]: time="2025-07-06T23:28:07.385005789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 6 23:28:07.391951 containerd[1930]: time="2025-07-06T23:28:07.391858509Z" level=info msg="CreateContainer within sandbox \"409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 6 23:28:07.412083 containerd[1930]: time="2025-07-06T23:28:07.410756769Z" level=info msg="Container f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:07.427712 containerd[1930]: time="2025-07-06T23:28:07.427660965Z" level=info msg="CreateContainer within sandbox \"409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\""
Jul 6 23:28:07.429641 containerd[1930]: time="2025-07-06T23:28:07.429580005Z" level=info msg="StartContainer for \"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\""
Jul 6 23:28:07.434860 containerd[1930]: time="2025-07-06T23:28:07.434795997Z" level=info msg="connecting to shim f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393" address="unix:///run/containerd/s/fbd6a8316a983261177e5678de529ea33736f5208acb58167e0588691e27fe75" protocol=ttrpc version=3
Jul 6 23:28:07.481434 systemd[1]: Started cri-containerd-f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393.scope - libcontainer container f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393.
Jul 6 23:28:07.541566 containerd[1930]: time="2025-07-06T23:28:07.541131334Z" level=info msg="StartContainer for \"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" returns successfully"
Jul 6 23:28:08.465631 kubelet[3280]: I0706 23:28:08.465526 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s9cbw" podStartSLOduration=4.465502006 podStartE2EDuration="4.465502006s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:05.459257755 +0000 UTC m=+6.410525900" watchObservedRunningTime="2025-07-06 23:28:08.465502006 +0000 UTC m=+9.416770151"
Jul 6 23:28:08.657159 kubelet[3280]: I0706 23:28:08.657029 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-s2bs5" podStartSLOduration=2.4287914 podStartE2EDuration="4.657005879s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="2025-07-06 23:28:05.15934599 +0000 UTC m=+6.110614123" lastFinishedPulling="2025-07-06 23:28:07.387560469 +0000 UTC m=+8.338828602" observedRunningTime="2025-07-06 23:28:08.466950262 +0000 UTC m=+9.418218419" watchObservedRunningTime="2025-07-06 23:28:08.657005879 +0000 UTC m=+9.608274012"
Jul 6 23:28:16.223780 sudo[2334]: pam_unix(sudo:session): session closed for user root
Jul 6 23:28:16.249209 sshd[2333]: Connection closed by 139.178.89.65 port 41740
Jul 6 23:28:16.250065 sshd-session[2329]: pam_unix(sshd:session): session closed for user core
Jul 6 23:28:16.262126 systemd-logind[1901]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:28:16.264125 systemd[1]: sshd@6-172.31.26.116:22-139.178.89.65:41740.service: Deactivated successfully.
Jul 6 23:28:16.273118 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:28:16.275987 systemd[1]: session-7.scope: Consumed 10.854s CPU time, 231.4M memory peak.
Jul 6 23:28:16.281149 systemd-logind[1901]: Removed session 7.
Jul 6 23:28:26.330565 systemd[1]: Created slice kubepods-besteffort-podb17b5e7b_1db5_4520_860d_9c9cd8d01ecb.slice - libcontainer container kubepods-besteffort-podb17b5e7b_1db5_4520_860d_9c9cd8d01ecb.slice.
Jul 6 23:28:26.374406 kubelet[3280]: I0706 23:28:26.374336 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b17b5e7b-1db5-4520-860d-9c9cd8d01ecb-tigera-ca-bundle\") pod \"calico-typha-768c9bf68f-jbx42\" (UID: \"b17b5e7b-1db5-4520-860d-9c9cd8d01ecb\") " pod="calico-system/calico-typha-768c9bf68f-jbx42"
Jul 6 23:28:26.374406 kubelet[3280]: I0706 23:28:26.374418 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xf8r\" (UniqueName: \"kubernetes.io/projected/b17b5e7b-1db5-4520-860d-9c9cd8d01ecb-kube-api-access-2xf8r\") pod \"calico-typha-768c9bf68f-jbx42\" (UID: \"b17b5e7b-1db5-4520-860d-9c9cd8d01ecb\") " pod="calico-system/calico-typha-768c9bf68f-jbx42"
Jul 6 23:28:26.375022 kubelet[3280]: I0706 23:28:26.374463 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b17b5e7b-1db5-4520-860d-9c9cd8d01ecb-typha-certs\") pod \"calico-typha-768c9bf68f-jbx42\" (UID: \"b17b5e7b-1db5-4520-860d-9c9cd8d01ecb\") " pod="calico-system/calico-typha-768c9bf68f-jbx42"
Jul 6 23:28:26.640310 containerd[1930]: time="2025-07-06T23:28:26.639350441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-768c9bf68f-jbx42,Uid:b17b5e7b-1db5-4520-860d-9c9cd8d01ecb,Namespace:calico-system,Attempt:0,}"
Jul 6 23:28:26.704354 containerd[1930]: time="2025-07-06T23:28:26.703903313Z" level=info msg="connecting to shim d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74" address="unix:///run/containerd/s/7d22215a8476007be2b9ac199b687d5dd596460b33f43112be01356d8cc56a86" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:26.784420 systemd[1]: Started cri-containerd-d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74.scope - libcontainer container d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74.
Jul 6 23:28:26.819672 systemd[1]: Created slice kubepods-besteffort-pod61a90870_e4b9_42cd_b617_f81c5f0e84d2.slice - libcontainer container kubepods-besteffort-pod61a90870_e4b9_42cd_b617_f81c5f0e84d2.slice.
Jul 6 23:28:26.877542 kubelet[3280]: I0706 23:28:26.877478 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-cni-log-dir\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877542 kubelet[3280]: I0706 23:28:26.877553 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61a90870-e4b9-42cd-b617-f81c5f0e84d2-node-certs\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877794 kubelet[3280]: I0706 23:28:26.877589 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-var-run-calico\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877794 kubelet[3280]: I0706 23:28:26.877631 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-xtables-lock\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877794 kubelet[3280]: I0706 23:28:26.877670 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-lib-modules\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877794 kubelet[3280]: I0706 23:28:26.877726 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-policysync\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.877794 kubelet[3280]: I0706 23:28:26.877762 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-var-lib-calico\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.878394 kubelet[3280]: I0706 23:28:26.877821 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61a90870-e4b9-42cd-b617-f81c5f0e84d2-tigera-ca-bundle\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.878394 kubelet[3280]: I0706 23:28:26.877859 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-flexvol-driver-host\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.878394 kubelet[3280]: I0706 23:28:26.877900 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrqg\" (UniqueName: \"kubernetes.io/projected/61a90870-e4b9-42cd-b617-f81c5f0e84d2-kube-api-access-gsrqg\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.878394 kubelet[3280]: I0706 23:28:26.877936 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-cni-net-dir\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.878394 kubelet[3280]: I0706 23:28:26.877970 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61a90870-e4b9-42cd-b617-f81c5f0e84d2-cni-bin-dir\") pod \"calico-node-n6xqb\" (UID: \"61a90870-e4b9-42cd-b617-f81c5f0e84d2\") " pod="calico-system/calico-node-n6xqb"
Jul 6 23:28:26.986527 kubelet[3280]: E0706 23:28:26.986364 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:26.986527 kubelet[3280]: W0706 23:28:26.986403 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:26.987458 kubelet[3280]: E0706 23:28:26.987366 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.003350 kubelet[3280]: E0706 23:28:27.003302 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.003350 kubelet[3280]: W0706 23:28:27.003340 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.003588 kubelet[3280]: E0706 23:28:27.003375 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.022249 kubelet[3280]: E0706 23:28:27.022194 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.022249 kubelet[3280]: W0706 23:28:27.022237 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.022249 kubelet[3280]: E0706 23:28:27.022271 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.071726 kubelet[3280]: E0706 23:28:27.071397 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d"
Jul 6 23:28:27.075078 kubelet[3280]: E0706 23:28:27.075012 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.075463 kubelet[3280]: W0706 23:28:27.075414 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.075610 kubelet[3280]: E0706 23:28:27.075467 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.078464 kubelet[3280]: E0706 23:28:27.078412 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.078464 kubelet[3280]: W0706 23:28:27.078456 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.079190 kubelet[3280]: E0706 23:28:27.078491 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.079551 kubelet[3280]: E0706 23:28:27.079505 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.079551 kubelet[3280]: W0706 23:28:27.079543 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.079731 kubelet[3280]: E0706 23:28:27.079578 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.079967 kubelet[3280]: E0706 23:28:27.079932 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.079967 kubelet[3280]: W0706 23:28:27.079961 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.080247 kubelet[3280]: E0706 23:28:27.079985 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.080915 kubelet[3280]: E0706 23:28:27.080843 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.080915 kubelet[3280]: W0706 23:28:27.080906 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.081298 kubelet[3280]: E0706 23:28:27.080938 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.083112 kubelet[3280]: E0706 23:28:27.082290 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.083112 kubelet[3280]: W0706 23:28:27.082329 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.083112 kubelet[3280]: E0706 23:28:27.082361 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.086141 kubelet[3280]: E0706 23:28:27.084765 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.086141 kubelet[3280]: W0706 23:28:27.084804 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.086141 kubelet[3280]: E0706 23:28:27.084838 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.088530 kubelet[3280]: E0706 23:28:27.088492 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.089879 kubelet[3280]: W0706 23:28:27.088719 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.089879 kubelet[3280]: E0706 23:28:27.088765 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.091371 kubelet[3280]: E0706 23:28:27.091335 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.092077 kubelet[3280]: W0706 23:28:27.091712 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.092077 kubelet[3280]: E0706 23:28:27.091791 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.094781 kubelet[3280]: E0706 23:28:27.094311 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.094781 kubelet[3280]: W0706 23:28:27.094363 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.094781 kubelet[3280]: E0706 23:28:27.094398 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.096351 kubelet[3280]: E0706 23:28:27.096281 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.096351 kubelet[3280]: W0706 23:28:27.096340 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.096542 kubelet[3280]: E0706 23:28:27.096375 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.097459 kubelet[3280]: E0706 23:28:27.097415 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.097459 kubelet[3280]: W0706 23:28:27.097455 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.097620 kubelet[3280]: E0706 23:28:27.097489 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.103358 kubelet[3280]: E0706 23:28:27.103305 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.103479 kubelet[3280]: W0706 23:28:27.103355 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.103479 kubelet[3280]: E0706 23:28:27.103403 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.105076 kubelet[3280]: E0706 23:28:27.105012 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.105201 kubelet[3280]: W0706 23:28:27.105093 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.105201 kubelet[3280]: E0706 23:28:27.105129 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.106902 kubelet[3280]: E0706 23:28:27.106401 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.106902 kubelet[3280]: W0706 23:28:27.106443 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.106902 kubelet[3280]: E0706 23:28:27.106476 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.107332 kubelet[3280]: E0706 23:28:27.107280 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.107332 kubelet[3280]: W0706 23:28:27.107318 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.107475 kubelet[3280]: E0706 23:28:27.107352 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.108639 kubelet[3280]: E0706 23:28:27.108590 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.108639 kubelet[3280]: W0706 23:28:27.108630 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.108832 kubelet[3280]: E0706 23:28:27.108664 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.110447 kubelet[3280]: E0706 23:28:27.110353 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.110447 kubelet[3280]: W0706 23:28:27.110392 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.110447 kubelet[3280]: E0706 23:28:27.110424 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.111303 kubelet[3280]: E0706 23:28:27.111258 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.111303 kubelet[3280]: W0706 23:28:27.111296 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.111453 kubelet[3280]: E0706 23:28:27.111328 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.113196 kubelet[3280]: E0706 23:28:27.113125 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.113196 kubelet[3280]: W0706 23:28:27.113167 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.113476 kubelet[3280]: E0706 23:28:27.113216 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.113909 kubelet[3280]: E0706 23:28:27.113865 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.113909 kubelet[3280]: W0706 23:28:27.113899 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.114347 kubelet[3280]: E0706 23:28:27.113928 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.114347 kubelet[3280]: I0706 23:28:27.113982 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjr8\" (UniqueName: \"kubernetes.io/projected/c1099aef-6f47-4c54-92f1-abdaae830d6d-kube-api-access-kjjr8\") pod \"csi-node-driver-h7wc5\" (UID: \"c1099aef-6f47-4c54-92f1-abdaae830d6d\") " pod="calico-system/csi-node-driver-h7wc5"
Jul 6 23:28:27.115114 kubelet[3280]: E0706 23:28:27.114905 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.115114 kubelet[3280]: W0706 23:28:27.114939 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.115114 kubelet[3280]: E0706 23:28:27.114983 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.116027 kubelet[3280]: E0706 23:28:27.115980 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.116639 kubelet[3280]: W0706 23:28:27.116195 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.116639 kubelet[3280]: E0706 23:28:27.116418 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.116944 kubelet[3280]: E0706 23:28:27.116919 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.117124 kubelet[3280]: W0706 23:28:27.117076 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.117303 kubelet[3280]: E0706 23:28:27.117241 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.117486 kubelet[3280]: I0706 23:28:27.117427 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c1099aef-6f47-4c54-92f1-abdaae830d6d-varrun\") pod \"csi-node-driver-h7wc5\" (UID: \"c1099aef-6f47-4c54-92f1-abdaae830d6d\") " pod="calico-system/csi-node-driver-h7wc5"
Jul 6 23:28:27.119722 kubelet[3280]: E0706 23:28:27.119655 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.119722 kubelet[3280]: W0706 23:28:27.119703 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.119954 kubelet[3280]: E0706 23:28:27.119751 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:28:27.122292 kubelet[3280]: E0706 23:28:27.122235 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:28:27.122292 kubelet[3280]: W0706 23:28:27.122275 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:28:27.122618 kubelet[3280]: E0706 23:28:27.122562 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 6 23:28:27.126582 kubelet[3280]: E0706 23:28:27.126138 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.126582 kubelet[3280]: W0706 23:28:27.126567 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.126877 kubelet[3280]: E0706 23:28:27.126603 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.126877 kubelet[3280]: I0706 23:28:27.126660 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c1099aef-6f47-4c54-92f1-abdaae830d6d-registration-dir\") pod \"csi-node-driver-h7wc5\" (UID: \"c1099aef-6f47-4c54-92f1-abdaae830d6d\") " pod="calico-system/csi-node-driver-h7wc5" Jul 6 23:28:27.127463 kubelet[3280]: E0706 23:28:27.127413 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.127463 kubelet[3280]: W0706 23:28:27.127454 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.127789 kubelet[3280]: E0706 23:28:27.127499 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.127789 kubelet[3280]: I0706 23:28:27.127540 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1099aef-6f47-4c54-92f1-abdaae830d6d-kubelet-dir\") pod \"csi-node-driver-h7wc5\" (UID: \"c1099aef-6f47-4c54-92f1-abdaae830d6d\") " pod="calico-system/csi-node-driver-h7wc5" Jul 6 23:28:27.128167 kubelet[3280]: E0706 23:28:27.128139 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.128456 kubelet[3280]: W0706 23:28:27.128268 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.128456 kubelet[3280]: E0706 23:28:27.128306 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.129231 kubelet[3280]: E0706 23:28:27.129196 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.129628 kubelet[3280]: W0706 23:28:27.129508 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.130466 kubelet[3280]: E0706 23:28:27.130133 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.131036 kubelet[3280]: E0706 23:28:27.130974 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.131444 kubelet[3280]: W0706 23:28:27.131345 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.132142 kubelet[3280]: E0706 23:28:27.131908 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.132920 kubelet[3280]: I0706 23:28:27.132832 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c1099aef-6f47-4c54-92f1-abdaae830d6d-socket-dir\") pod \"csi-node-driver-h7wc5\" (UID: \"c1099aef-6f47-4c54-92f1-abdaae830d6d\") " pod="calico-system/csi-node-driver-h7wc5" Jul 6 23:28:27.133211 kubelet[3280]: E0706 23:28:27.133187 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.133365 kubelet[3280]: W0706 23:28:27.133338 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.133704 kubelet[3280]: E0706 23:28:27.133662 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.135351 kubelet[3280]: E0706 23:28:27.134786 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.135351 kubelet[3280]: W0706 23:28:27.134827 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.135351 kubelet[3280]: E0706 23:28:27.134877 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.135561 containerd[1930]: time="2025-07-06T23:28:27.134879595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n6xqb,Uid:61a90870-e4b9-42cd-b617-f81c5f0e84d2,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:27.136367 kubelet[3280]: E0706 23:28:27.136251 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.136367 kubelet[3280]: W0706 23:28:27.136292 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.136367 kubelet[3280]: E0706 23:28:27.136325 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.137179 kubelet[3280]: E0706 23:28:27.137127 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.137179 kubelet[3280]: W0706 23:28:27.137162 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.137375 kubelet[3280]: E0706 23:28:27.137194 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.188369 containerd[1930]: time="2025-07-06T23:28:27.188277303Z" level=info msg="connecting to shim 7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95" address="unix:///run/containerd/s/7e9b817bce14b31fb32b3f64c0fdacec88c471f4d4113c22babeaa2449129297" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:27.234077 kubelet[3280]: E0706 23:28:27.233958 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.234642 kubelet[3280]: W0706 23:28:27.234238 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.234642 kubelet[3280]: E0706 23:28:27.234314 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.235416 kubelet[3280]: E0706 23:28:27.235297 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.235416 kubelet[3280]: W0706 23:28:27.235362 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.235777 kubelet[3280]: E0706 23:28:27.235695 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.238407 kubelet[3280]: E0706 23:28:27.238183 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.238407 kubelet[3280]: W0706 23:28:27.238225 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.238947 kubelet[3280]: E0706 23:28:27.238398 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.239293 kubelet[3280]: E0706 23:28:27.239016 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.239293 kubelet[3280]: W0706 23:28:27.239130 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.239293 kubelet[3280]: E0706 23:28:27.239201 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.241485 kubelet[3280]: E0706 23:28:27.241437 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.241485 kubelet[3280]: W0706 23:28:27.241474 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.241855 kubelet[3280]: E0706 23:28:27.241517 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.243185 kubelet[3280]: E0706 23:28:27.242170 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.243185 kubelet[3280]: W0706 23:28:27.242201 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.243185 kubelet[3280]: E0706 23:28:27.242941 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.243185 kubelet[3280]: W0706 23:28:27.242961 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.243185 kubelet[3280]: E0706 23:28:27.243093 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.243185 kubelet[3280]: E0706 23:28:27.243126 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.244758 kubelet[3280]: E0706 23:28:27.244483 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.244758 kubelet[3280]: W0706 23:28:27.244519 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.245617 kubelet[3280]: E0706 23:28:27.245545 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.248719 kubelet[3280]: E0706 23:28:27.247235 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.248719 kubelet[3280]: W0706 23:28:27.247286 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.248719 kubelet[3280]: E0706 23:28:27.248540 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.248719 kubelet[3280]: W0706 23:28:27.248566 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.249719 kubelet[3280]: E0706 23:28:27.249093 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.249719 kubelet[3280]: E0706 23:28:27.249145 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.252246 kubelet[3280]: E0706 23:28:27.252194 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.252246 kubelet[3280]: W0706 23:28:27.252235 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.252488 kubelet[3280]: E0706 23:28:27.252361 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.252738 kubelet[3280]: E0706 23:28:27.252703 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.252738 kubelet[3280]: W0706 23:28:27.252732 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.252879 kubelet[3280]: E0706 23:28:27.252843 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.254264 kubelet[3280]: E0706 23:28:27.254204 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.254264 kubelet[3280]: W0706 23:28:27.254242 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.254773 kubelet[3280]: E0706 23:28:27.254458 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.256520 kubelet[3280]: E0706 23:28:27.256219 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.256520 kubelet[3280]: W0706 23:28:27.256258 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.258378 kubelet[3280]: E0706 23:28:27.258265 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.258378 kubelet[3280]: W0706 23:28:27.258306 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.261974 kubelet[3280]: E0706 23:28:27.261583 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.262487 kubelet[3280]: E0706 23:28:27.261914 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.262487 kubelet[3280]: W0706 23:28:27.262330 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.262487 kubelet[3280]: E0706 23:28:27.261932 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.262487 kubelet[3280]: E0706 23:28:27.262445 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.266160 kubelet[3280]: E0706 23:28:27.265922 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.266631 kubelet[3280]: W0706 23:28:27.266387 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.267313 kubelet[3280]: E0706 23:28:27.267117 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.271847 kubelet[3280]: E0706 23:28:27.271517 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.271847 kubelet[3280]: W0706 23:28:27.271550 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.274997 kubelet[3280]: E0706 23:28:27.274959 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.275376 kubelet[3280]: W0706 23:28:27.275095 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.276100 kubelet[3280]: E0706 23:28:27.276021 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.276100 kubelet[3280]: E0706 23:28:27.276093 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.276499 kubelet[3280]: E0706 23:28:27.276297 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.276936 kubelet[3280]: W0706 23:28:27.276577 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.276936 kubelet[3280]: E0706 23:28:27.276617 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.280527 kubelet[3280]: E0706 23:28:27.280213 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.280527 kubelet[3280]: W0706 23:28:27.280255 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.280697 kubelet[3280]: E0706 23:28:27.280586 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.280697 kubelet[3280]: W0706 23:28:27.280604 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.281445 kubelet[3280]: E0706 23:28:27.280892 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.281445 kubelet[3280]: W0706 23:28:27.280923 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 6 23:28:27.281445 kubelet[3280]: E0706 23:28:27.280951 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.281445 kubelet[3280]: E0706 23:28:27.280995 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.283595 kubelet[3280]: E0706 23:28:27.283446 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.283595 kubelet[3280]: W0706 23:28:27.283526 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.283924 kubelet[3280]: E0706 23:28:27.283623 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.283981 kubelet[3280]: E0706 23:28:27.283934 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:27.291210 kubelet[3280]: E0706 23:28:27.290813 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.291210 kubelet[3280]: W0706 23:28:27.290862 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.291210 kubelet[3280]: E0706 23:28:27.290900 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.300373 systemd[1]: Started cri-containerd-7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95.scope - libcontainer container 7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95. Jul 6 23:28:27.328425 containerd[1930]: time="2025-07-06T23:28:27.328272352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-768c9bf68f-jbx42,Uid:b17b5e7b-1db5-4520-860d-9c9cd8d01ecb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74\"" Jul 6 23:28:27.333298 containerd[1930]: time="2025-07-06T23:28:27.333231568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:28:27.355980 kubelet[3280]: E0706 23:28:27.355945 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.356257 kubelet[3280]: W0706 23:28:27.356228 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.356632 kubelet[3280]: E0706 23:28:27.356326 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.384215 kubelet[3280]: E0706 23:28:27.384155 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:27.384870 kubelet[3280]: W0706 23:28:27.384738 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:27.384870 kubelet[3280]: E0706 23:28:27.384791 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:27.481312 containerd[1930]: time="2025-07-06T23:28:27.481183673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n6xqb,Uid:61a90870-e4b9-42cd-b617-f81c5f0e84d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\"" Jul 6 23:28:28.305286 kubelet[3280]: E0706 23:28:28.305210 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d" Jul 6 23:28:28.702988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508728277.mount: Deactivated successfully. 
Jul 6 23:28:29.514018 containerd[1930]: time="2025-07-06T23:28:29.513899407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:29.516675 containerd[1930]: time="2025-07-06T23:28:29.516613783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:28:29.518667 containerd[1930]: time="2025-07-06T23:28:29.518583859Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:29.523021 containerd[1930]: time="2025-07-06T23:28:29.522938395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:29.524314 containerd[1930]: time="2025-07-06T23:28:29.524115715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.190438107s" Jul 6 23:28:29.524314 containerd[1930]: time="2025-07-06T23:28:29.524176495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:28:29.527273 containerd[1930]: time="2025-07-06T23:28:29.527205499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:28:29.555790 containerd[1930]: time="2025-07-06T23:28:29.555618079Z" level=info msg="CreateContainer within sandbox \"d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:28:29.576784 containerd[1930]: time="2025-07-06T23:28:29.576733927Z" level=info msg="Container a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:29.585754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193636945.mount: Deactivated successfully. Jul 6 23:28:29.600715 containerd[1930]: time="2025-07-06T23:28:29.600638023Z" level=info msg="CreateContainer within sandbox \"d34d44b31069ddd314b9097f5333b70c42cbdac8a62ff6f1fa5421ab3c15cb74\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6\"" Jul 6 23:28:29.601881 containerd[1930]: time="2025-07-06T23:28:29.601793311Z" level=info msg="StartContainer for \"a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6\"" Jul 6 23:28:29.607003 containerd[1930]: time="2025-07-06T23:28:29.606925795Z" level=info msg="connecting to shim a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6" address="unix:///run/containerd/s/7d22215a8476007be2b9ac199b687d5dd596460b33f43112be01356d8cc56a86" protocol=ttrpc version=3 Jul 6 23:28:29.647524 systemd[1]: Started cri-containerd-a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6.scope - libcontainer container a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6. 
Jul 6 23:28:29.763198 containerd[1930]: time="2025-07-06T23:28:29.762252200Z" level=info msg="StartContainer for \"a8a297d965b5f46c8dd45f6421f78faf8a826dd3c55e908c79538a1501a856c6\" returns successfully" Jul 6 23:28:30.305352 kubelet[3280]: E0706 23:28:30.305285 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d" Jul 6 23:28:30.573533 kubelet[3280]: I0706 23:28:30.572530 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-768c9bf68f-jbx42" podStartSLOduration=2.378321521 podStartE2EDuration="4.572504s" podCreationTimestamp="2025-07-06 23:28:26 +0000 UTC" firstStartedPulling="2025-07-06 23:28:27.33188296 +0000 UTC m=+28.283151093" lastFinishedPulling="2025-07-06 23:28:29.526065427 +0000 UTC m=+30.477333572" observedRunningTime="2025-07-06 23:28:30.570900188 +0000 UTC m=+31.522168321" watchObservedRunningTime="2025-07-06 23:28:30.572504 +0000 UTC m=+31.523772133" Jul 6 23:28:30.643562 kubelet[3280]: E0706 23:28:30.643509 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.643831 kubelet[3280]: W0706 23:28:30.643679 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.643831 kubelet[3280]: E0706 23:28:30.643743 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.644865 kubelet[3280]: E0706 23:28:30.644824 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.644865 kubelet[3280]: W0706 23:28:30.644858 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.645264 kubelet[3280]: E0706 23:28:30.645030 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.646037 kubelet[3280]: E0706 23:28:30.645982 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.646240 kubelet[3280]: W0706 23:28:30.646198 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.646240 kubelet[3280]: E0706 23:28:30.646230 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.647028 kubelet[3280]: E0706 23:28:30.646969 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.647028 kubelet[3280]: W0706 23:28:30.647000 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.647202 kubelet[3280]: E0706 23:28:30.647176 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.648276 kubelet[3280]: E0706 23:28:30.648187 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.648276 kubelet[3280]: W0706 23:28:30.648220 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.648276 kubelet[3280]: E0706 23:28:30.648249 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.648872 kubelet[3280]: E0706 23:28:30.648832 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.648872 kubelet[3280]: W0706 23:28:30.648863 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.649223 kubelet[3280]: E0706 23:28:30.648890 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.649339 kubelet[3280]: E0706 23:28:30.649297 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.649339 kubelet[3280]: W0706 23:28:30.649316 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.649457 kubelet[3280]: E0706 23:28:30.649339 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.650307 kubelet[3280]: E0706 23:28:30.650222 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.650307 kubelet[3280]: W0706 23:28:30.650255 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.650307 kubelet[3280]: E0706 23:28:30.650282 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.650874 kubelet[3280]: E0706 23:28:30.650632 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.650874 kubelet[3280]: W0706 23:28:30.650651 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.650874 kubelet[3280]: E0706 23:28:30.650671 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.651218 kubelet[3280]: E0706 23:28:30.650955 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.651218 kubelet[3280]: W0706 23:28:30.650978 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.651218 kubelet[3280]: E0706 23:28:30.651159 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.651557 kubelet[3280]: E0706 23:28:30.651478 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.651557 kubelet[3280]: W0706 23:28:30.651497 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.651557 kubelet[3280]: E0706 23:28:30.651517 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.651901 kubelet[3280]: E0706 23:28:30.651804 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.651901 kubelet[3280]: W0706 23:28:30.651820 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.651901 kubelet[3280]: E0706 23:28:30.651839 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.652672 kubelet[3280]: E0706 23:28:30.652181 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.652672 kubelet[3280]: W0706 23:28:30.652197 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.652672 kubelet[3280]: E0706 23:28:30.652217 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.652672 kubelet[3280]: E0706 23:28:30.652664 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.652964 kubelet[3280]: W0706 23:28:30.652684 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.652964 kubelet[3280]: E0706 23:28:30.652707 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.653157 kubelet[3280]: E0706 23:28:30.653003 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.653157 kubelet[3280]: W0706 23:28:30.653019 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.653157 kubelet[3280]: E0706 23:28:30.653037 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.689495 kubelet[3280]: E0706 23:28:30.689455 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.689742 kubelet[3280]: W0706 23:28:30.689713 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.690069 kubelet[3280]: E0706 23:28:30.689949 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.690810 kubelet[3280]: E0706 23:28:30.690774 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.691077 kubelet[3280]: W0706 23:28:30.691029 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.691206 kubelet[3280]: E0706 23:28:30.691183 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.691697 kubelet[3280]: E0706 23:28:30.691663 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.691697 kubelet[3280]: W0706 23:28:30.691695 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.691897 kubelet[3280]: E0706 23:28:30.691735 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.692239 kubelet[3280]: E0706 23:28:30.692203 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.692239 kubelet[3280]: W0706 23:28:30.692232 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.692467 kubelet[3280]: E0706 23:28:30.692268 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.693077 kubelet[3280]: E0706 23:28:30.693002 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.693077 kubelet[3280]: W0706 23:28:30.693035 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.693327 kubelet[3280]: E0706 23:28:30.693120 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.693722 kubelet[3280]: E0706 23:28:30.693676 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.694020 kubelet[3280]: W0706 23:28:30.693972 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.694020 kubelet[3280]: E0706 23:28:30.694095 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.694435 kubelet[3280]: E0706 23:28:30.694350 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.694435 kubelet[3280]: W0706 23:28:30.694367 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.694817 kubelet[3280]: E0706 23:28:30.694703 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.694817 kubelet[3280]: E0706 23:28:30.694726 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.694817 kubelet[3280]: W0706 23:28:30.694743 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.695966 kubelet[3280]: E0706 23:28:30.695211 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.695966 kubelet[3280]: E0706 23:28:30.695321 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.695966 kubelet[3280]: W0706 23:28:30.695338 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.695966 kubelet[3280]: E0706 23:28:30.695363 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.695966 kubelet[3280]: E0706 23:28:30.695922 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.695966 kubelet[3280]: W0706 23:28:30.695941 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.695966 kubelet[3280]: E0706 23:28:30.695965 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.696425 kubelet[3280]: E0706 23:28:30.696264 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.696425 kubelet[3280]: W0706 23:28:30.696281 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.696425 kubelet[3280]: E0706 23:28:30.696301 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.697478 kubelet[3280]: E0706 23:28:30.697419 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.697478 kubelet[3280]: W0706 23:28:30.697452 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.697478 kubelet[3280]: E0706 23:28:30.697494 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.699232 kubelet[3280]: E0706 23:28:30.699140 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.699833 kubelet[3280]: W0706 23:28:30.699173 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.699833 kubelet[3280]: E0706 23:28:30.699478 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.700714 kubelet[3280]: E0706 23:28:30.700140 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.700714 kubelet[3280]: W0706 23:28:30.700164 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.700714 kubelet[3280]: E0706 23:28:30.700200 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.701551 kubelet[3280]: E0706 23:28:30.701367 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.701699 kubelet[3280]: W0706 23:28:30.701658 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.702025 kubelet[3280]: E0706 23:28:30.701982 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.702864 kubelet[3280]: E0706 23:28:30.702682 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.702864 kubelet[3280]: W0706 23:28:30.702710 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.702864 kubelet[3280]: E0706 23:28:30.702749 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.703352 kubelet[3280]: E0706 23:28:30.703318 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.703416 kubelet[3280]: W0706 23:28:30.703350 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.703416 kubelet[3280]: E0706 23:28:30.703378 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:30.704989 kubelet[3280]: E0706 23:28:30.704934 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:30.704989 kubelet[3280]: W0706 23:28:30.704971 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:30.705371 kubelet[3280]: E0706 23:28:30.705003 3280 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:30.752087 containerd[1930]: time="2025-07-06T23:28:30.751535205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.754233 containerd[1930]: time="2025-07-06T23:28:30.754189245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:28:30.756674 containerd[1930]: time="2025-07-06T23:28:30.756555921Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.763600 containerd[1930]: time="2025-07-06T23:28:30.763509261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.765077 containerd[1930]: time="2025-07-06T23:28:30.764337741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.237067586s" Jul 6 23:28:30.765077 containerd[1930]: time="2025-07-06T23:28:30.764399565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:28:30.773279 containerd[1930]: time="2025-07-06T23:28:30.773096349Z" level=info msg="CreateContainer within sandbox \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:28:30.800073 containerd[1930]: time="2025-07-06T23:28:30.799207521Z" level=info msg="Container 1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:30.822218 containerd[1930]: time="2025-07-06T23:28:30.822142569Z" level=info msg="CreateContainer within sandbox \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\"" Jul 6 23:28:30.824463 containerd[1930]: time="2025-07-06T23:28:30.824280105Z" level=info msg="StartContainer for \"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\"" Jul 6 23:28:30.832374 containerd[1930]: time="2025-07-06T23:28:30.831857505Z" level=info msg="connecting to shim 1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d" address="unix:///run/containerd/s/7e9b817bce14b31fb32b3f64c0fdacec88c471f4d4113c22babeaa2449129297" protocol=ttrpc version=3 Jul 6 23:28:30.881379 systemd[1]: Started cri-containerd-1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d.scope - libcontainer container 1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d. Jul 6 23:28:30.964089 containerd[1930]: time="2025-07-06T23:28:30.963985066Z" level=info msg="StartContainer for \"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\" returns successfully" Jul 6 23:28:30.999298 systemd[1]: cri-containerd-1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d.scope: Deactivated successfully. 
Jul 6 23:28:31.006462 containerd[1930]: time="2025-07-06T23:28:31.006343014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\" id:\"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\" pid:4140 exited_at:{seconds:1751844511 nanos:4003410}" Jul 6 23:28:31.006629 containerd[1930]: time="2025-07-06T23:28:31.006403938Z" level=info msg="received exit event container_id:\"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\" id:\"1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d\" pid:4140 exited_at:{seconds:1751844511 nanos:4003410}" Jul 6 23:28:31.050601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d74b47dac13077a0e624ed05d02f2ab1dda2fa00cc06a68ede8e585f8d22b1d-rootfs.mount: Deactivated successfully. Jul 6 23:28:31.565958 containerd[1930]: time="2025-07-06T23:28:31.565818489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:28:32.305906 kubelet[3280]: E0706 23:28:32.305821 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d" Jul 6 23:28:34.306192 kubelet[3280]: E0706 23:28:34.305259 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d" Jul 6 23:28:34.410671 containerd[1930]: time="2025-07-06T23:28:34.410594819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:34.413381 
containerd[1930]: time="2025-07-06T23:28:34.413302499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:28:34.415846 containerd[1930]: time="2025-07-06T23:28:34.415667327Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:34.425568 containerd[1930]: time="2025-07-06T23:28:34.425493347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:34.429678 containerd[1930]: time="2025-07-06T23:28:34.429488231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.86292465s" Jul 6 23:28:34.429678 containerd[1930]: time="2025-07-06T23:28:34.429552575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:28:34.437073 containerd[1930]: time="2025-07-06T23:28:34.436730327Z" level=info msg="CreateContainer within sandbox \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:28:34.457686 containerd[1930]: time="2025-07-06T23:28:34.456319007Z" level=info msg="Container 3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:34.475418 containerd[1930]: time="2025-07-06T23:28:34.475363668Z" level=info msg="CreateContainer within sandbox 
\"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\"" Jul 6 23:28:34.476736 containerd[1930]: time="2025-07-06T23:28:34.476686884Z" level=info msg="StartContainer for \"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\"" Jul 6 23:28:34.479992 containerd[1930]: time="2025-07-06T23:28:34.479819712Z" level=info msg="connecting to shim 3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff" address="unix:///run/containerd/s/7e9b817bce14b31fb32b3f64c0fdacec88c471f4d4113c22babeaa2449129297" protocol=ttrpc version=3 Jul 6 23:28:34.520329 systemd[1]: Started cri-containerd-3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff.scope - libcontainer container 3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff. Jul 6 23:28:34.620486 containerd[1930]: time="2025-07-06T23:28:34.620397384Z" level=info msg="StartContainer for \"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\" returns successfully" Jul 6 23:28:35.541349 systemd[1]: cri-containerd-3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff.scope: Deactivated successfully. Jul 6 23:28:35.542733 systemd[1]: cri-containerd-3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff.scope: Consumed 935ms CPU time, 186.7M memory peak, 165.8M written to disk. 
Jul 6 23:28:35.544738 containerd[1930]: time="2025-07-06T23:28:35.544371457Z" level=info msg="received exit event container_id:\"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\" id:\"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\" pid:4199 exited_at:{seconds:1751844515 nanos:543787309}" Jul 6 23:28:35.545893 containerd[1930]: time="2025-07-06T23:28:35.544605469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\" id:\"3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff\" pid:4199 exited_at:{seconds:1751844515 nanos:543787309}" Jul 6 23:28:35.589958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec67654fb5da55e06c408b434e0584fef04ad7e6c196270e613c89c10ae11ff-rootfs.mount: Deactivated successfully. Jul 6 23:28:35.612239 kubelet[3280]: I0706 23:28:35.611919 3280 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:28:35.702453 systemd[1]: Created slice kubepods-burstable-podeb5c3d94_8803_44a1_af17_51b481acd517.slice - libcontainer container kubepods-burstable-podeb5c3d94_8803_44a1_af17_51b481acd517.slice. Jul 6 23:28:35.729230 systemd[1]: Created slice kubepods-burstable-podd0e807e9_0b9a_42a8_89c1_7d48e397dd4f.slice - libcontainer container kubepods-burstable-podd0e807e9_0b9a_42a8_89c1_7d48e397dd4f.slice. 
Jul 6 23:28:35.736383 kubelet[3280]: I0706 23:28:35.734650 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwd7h\" (UniqueName: \"kubernetes.io/projected/eb5c3d94-8803-44a1-af17-51b481acd517-kube-api-access-pwd7h\") pod \"coredns-7c65d6cfc9-w7q6j\" (UID: \"eb5c3d94-8803-44a1-af17-51b481acd517\") " pod="kube-system/coredns-7c65d6cfc9-w7q6j" Jul 6 23:28:35.736383 kubelet[3280]: I0706 23:28:35.734721 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb5c3d94-8803-44a1-af17-51b481acd517-config-volume\") pod \"coredns-7c65d6cfc9-w7q6j\" (UID: \"eb5c3d94-8803-44a1-af17-51b481acd517\") " pod="kube-system/coredns-7c65d6cfc9-w7q6j" Jul 6 23:28:35.736383 kubelet[3280]: I0706 23:28:35.734762 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e807e9-0b9a-42a8-89c1-7d48e397dd4f-config-volume\") pod \"coredns-7c65d6cfc9-hgfgz\" (UID: \"d0e807e9-0b9a-42a8-89c1-7d48e397dd4f\") " pod="kube-system/coredns-7c65d6cfc9-hgfgz" Jul 6 23:28:35.736383 kubelet[3280]: I0706 23:28:35.734797 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xsv\" (UniqueName: \"kubernetes.io/projected/d0e807e9-0b9a-42a8-89c1-7d48e397dd4f-kube-api-access-j4xsv\") pod \"coredns-7c65d6cfc9-hgfgz\" (UID: \"d0e807e9-0b9a-42a8-89c1-7d48e397dd4f\") " pod="kube-system/coredns-7c65d6cfc9-hgfgz" Jul 6 23:28:35.759206 kubelet[3280]: W0706 23:28:35.759160 3280 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-116' and 
this object Jul 6 23:28:35.761216 kubelet[3280]: E0706 23:28:35.761007 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.761216 kubelet[3280]: W0706 23:28:35.759868 3280 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.761216 kubelet[3280]: E0706 23:28:35.761113 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.761216 kubelet[3280]: W0706 23:28:35.759957 3280 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.801402 kubelet[3280]: E0706 23:28:35.763143 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User 
\"system:node:ip-172-31-26-116\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.801402 kubelet[3280]: W0706 23:28:35.763295 3280 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.801402 kubelet[3280]: E0706 23:28:35.763329 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.801402 kubelet[3280]: W0706 23:28:35.760019 3280 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.801402 kubelet[3280]: E0706 23:28:35.763376 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.765674 
systemd[1]: Created slice kubepods-besteffort-podfc56821c_5139_4749_84ae_639dc6e9b07f.slice - libcontainer container kubepods-besteffort-podfc56821c_5139_4749_84ae_639dc6e9b07f.slice. Jul 6 23:28:35.801897 kubelet[3280]: W0706 23:28:35.760111 3280 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.801897 kubelet[3280]: E0706 23:28:35.763443 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.801897 kubelet[3280]: W0706 23:28:35.766160 3280 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-26-116" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-116' and this object Jul 6 23:28:35.801897 kubelet[3280]: E0706 23:28:35.766213 3280 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-26-116\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-26-116' and this object" logger="UnhandledError" Jul 6 23:28:35.792099 systemd[1]: Created slice 
kubepods-besteffort-podecbe8d15_6e49_4751_89fa_dcb7a48b9dab.slice - libcontainer container kubepods-besteffort-podecbe8d15_6e49_4751_89fa_dcb7a48b9dab.slice. Jul 6 23:28:35.818143 systemd[1]: Created slice kubepods-besteffort-podc3904aef_10f2_4f16_bf54_789fb0de513d.slice - libcontainer container kubepods-besteffort-podc3904aef_10f2_4f16_bf54_789fb0de513d.slice. Jul 6 23:28:35.835961 kubelet[3280]: I0706 23:28:35.835869 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc56821c-5139-4749-84ae-639dc6e9b07f-tigera-ca-bundle\") pod \"calico-kube-controllers-64bbcddc7c-s49wp\" (UID: \"fc56821c-5139-4749-84ae-639dc6e9b07f\") " pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" Jul 6 23:28:35.836114 kubelet[3280]: I0706 23:28:35.836019 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-goldmane-key-pair\") pod \"goldmane-58fd7646b9-bsbzl\" (UID: \"1d85a67a-24d6-4a23-b48c-0fe13a5fb096\") " pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:35.836202 kubelet[3280]: I0706 23:28:35.836110 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8qs\" (UniqueName: \"kubernetes.io/projected/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-kube-api-access-4f8qs\") pod \"goldmane-58fd7646b9-bsbzl\" (UID: \"1d85a67a-24d6-4a23-b48c-0fe13a5fb096\") " pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:35.836202 kubelet[3280]: I0706 23:28:35.836155 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nm2n\" (UniqueName: \"kubernetes.io/projected/ecbe8d15-6e49-4751-89fa-dcb7a48b9dab-kube-api-access-7nm2n\") pod \"calico-apiserver-7799487779-zg2ph\" (UID: \"ecbe8d15-6e49-4751-89fa-dcb7a48b9dab\") " 
pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" Jul 6 23:28:35.836202 kubelet[3280]: I0706 23:28:35.836195 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gj8\" (UniqueName: \"kubernetes.io/projected/c3904aef-10f2-4f16-bf54-789fb0de513d-kube-api-access-w6gj8\") pod \"calico-apiserver-7799487779-lslkv\" (UID: \"c3904aef-10f2-4f16-bf54-789fb0de513d\") " pod="calico-apiserver/calico-apiserver-7799487779-lslkv" Jul 6 23:28:35.836376 kubelet[3280]: I0706 23:28:35.836230 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-bsbzl\" (UID: \"1d85a67a-24d6-4a23-b48c-0fe13a5fb096\") " pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:35.836376 kubelet[3280]: I0706 23:28:35.836271 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair\") pod \"whisker-66fcbbbf57-m5mwv\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " pod="calico-system/whisker-66fcbbbf57-m5mwv" Jul 6 23:28:35.836376 kubelet[3280]: I0706 23:28:35.836316 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd15636e-8113-4840-a994-881ea05acf18-whisker-ca-bundle\") pod \"whisker-66fcbbbf57-m5mwv\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " pod="calico-system/whisker-66fcbbbf57-m5mwv" Jul 6 23:28:35.836376 kubelet[3280]: I0706 23:28:35.836361 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/c3904aef-10f2-4f16-bf54-789fb0de513d-calico-apiserver-certs\") pod \"calico-apiserver-7799487779-lslkv\" (UID: \"c3904aef-10f2-4f16-bf54-789fb0de513d\") " pod="calico-apiserver/calico-apiserver-7799487779-lslkv" Jul 6 23:28:35.836577 kubelet[3280]: I0706 23:28:35.836403 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gms8j\" (UniqueName: \"kubernetes.io/projected/fd15636e-8113-4840-a994-881ea05acf18-kube-api-access-gms8j\") pod \"whisker-66fcbbbf57-m5mwv\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " pod="calico-system/whisker-66fcbbbf57-m5mwv" Jul 6 23:28:35.836577 kubelet[3280]: I0706 23:28:35.836455 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tg59\" (UniqueName: \"kubernetes.io/projected/fc56821c-5139-4749-84ae-639dc6e9b07f-kube-api-access-2tg59\") pod \"calico-kube-controllers-64bbcddc7c-s49wp\" (UID: \"fc56821c-5139-4749-84ae-639dc6e9b07f\") " pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" Jul 6 23:28:35.836577 kubelet[3280]: I0706 23:28:35.836494 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-config\") pod \"goldmane-58fd7646b9-bsbzl\" (UID: \"1d85a67a-24d6-4a23-b48c-0fe13a5fb096\") " pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:35.836577 kubelet[3280]: I0706 23:28:35.836555 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecbe8d15-6e49-4751-89fa-dcb7a48b9dab-calico-apiserver-certs\") pod \"calico-apiserver-7799487779-zg2ph\" (UID: \"ecbe8d15-6e49-4751-89fa-dcb7a48b9dab\") " pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" Jul 6 23:28:35.843379 systemd[1]: Created slice 
kubepods-besteffort-pod1d85a67a_24d6_4a23_b48c_0fe13a5fb096.slice - libcontainer container kubepods-besteffort-pod1d85a67a_24d6_4a23_b48c_0fe13a5fb096.slice. Jul 6 23:28:35.863097 systemd[1]: Created slice kubepods-besteffort-podfd15636e_8113_4840_a994_881ea05acf18.slice - libcontainer container kubepods-besteffort-podfd15636e_8113_4840_a994_881ea05acf18.slice. Jul 6 23:28:36.016528 containerd[1930]: time="2025-07-06T23:28:36.016151843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w7q6j,Uid:eb5c3d94-8803-44a1-af17-51b481acd517,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:36.045107 containerd[1930]: time="2025-07-06T23:28:36.045022187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgfgz,Uid:d0e807e9-0b9a-42a8-89c1-7d48e397dd4f,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:36.078586 containerd[1930]: time="2025-07-06T23:28:36.078519371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bbcddc7c-s49wp,Uid:fc56821c-5139-4749-84ae-639dc6e9b07f,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:36.325788 systemd[1]: Created slice kubepods-besteffort-podc1099aef_6f47_4c54_92f1_abdaae830d6d.slice - libcontainer container kubepods-besteffort-podc1099aef_6f47_4c54_92f1_abdaae830d6d.slice. 
Jul 6 23:28:36.346427 containerd[1930]: time="2025-07-06T23:28:36.343683493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7wc5,Uid:c1099aef-6f47-4c54-92f1-abdaae830d6d,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:36.455331 containerd[1930]: time="2025-07-06T23:28:36.455249425Z" level=error msg="Failed to destroy network for sandbox \"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.458814 containerd[1930]: time="2025-07-06T23:28:36.458728873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgfgz,Uid:d0e807e9-0b9a-42a8-89c1-7d48e397dd4f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.461193 kubelet[3280]: E0706 23:28:36.461119 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.461454 kubelet[3280]: E0706 23:28:36.461419 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hgfgz" Jul 6 23:28:36.461576 kubelet[3280]: E0706 23:28:36.461547 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hgfgz" Jul 6 23:28:36.462262 kubelet[3280]: E0706 23:28:36.461755 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hgfgz_kube-system(d0e807e9-0b9a-42a8-89c1-7d48e397dd4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hgfgz_kube-system(d0e807e9-0b9a-42a8-89c1-7d48e397dd4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6923653fe01d5b4608febf5aebbe7d89eec130ce096b663dee18ce17237fe564\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hgfgz" podUID="d0e807e9-0b9a-42a8-89c1-7d48e397dd4f" Jul 6 23:28:36.504331 containerd[1930]: time="2025-07-06T23:28:36.504264254Z" level=error msg="Failed to destroy network for sandbox \"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.507571 containerd[1930]: time="2025-07-06T23:28:36.507487610Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-64bbcddc7c-s49wp,Uid:fc56821c-5139-4749-84ae-639dc6e9b07f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.507950 kubelet[3280]: E0706 23:28:36.507889 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.508103 kubelet[3280]: E0706 23:28:36.507972 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" Jul 6 23:28:36.508103 kubelet[3280]: E0706 23:28:36.508006 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" Jul 6 23:28:36.508559 kubelet[3280]: E0706 23:28:36.508223 
3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64bbcddc7c-s49wp_calico-system(fc56821c-5139-4749-84ae-639dc6e9b07f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64bbcddc7c-s49wp_calico-system(fc56821c-5139-4749-84ae-639dc6e9b07f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21b8178dc99b54d0d289d6d05a7301878fb17ab8707323bae31e5bdc5cef6ebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" podUID="fc56821c-5139-4749-84ae-639dc6e9b07f" Jul 6 23:28:36.512205 containerd[1930]: time="2025-07-06T23:28:36.512124122Z" level=error msg="Failed to destroy network for sandbox \"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.515492 containerd[1930]: time="2025-07-06T23:28:36.515406194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w7q6j,Uid:eb5c3d94-8803-44a1-af17-51b481acd517,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.516155 kubelet[3280]: E0706 23:28:36.516092 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.516281 kubelet[3280]: E0706 23:28:36.516173 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w7q6j" Jul 6 23:28:36.516281 kubelet[3280]: E0706 23:28:36.516215 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w7q6j" Jul 6 23:28:36.516394 kubelet[3280]: E0706 23:28:36.516272 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w7q6j_kube-system(eb5c3d94-8803-44a1-af17-51b481acd517)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w7q6j_kube-system(eb5c3d94-8803-44a1-af17-51b481acd517)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec7ebacfd34465f3525dd630b5e585bcd08a383998e0f8d3a40732baa37f7d48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w7q6j" 
podUID="eb5c3d94-8803-44a1-af17-51b481acd517" Jul 6 23:28:36.532212 containerd[1930]: time="2025-07-06T23:28:36.532064342Z" level=error msg="Failed to destroy network for sandbox \"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.535001 containerd[1930]: time="2025-07-06T23:28:36.534852266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7wc5,Uid:c1099aef-6f47-4c54-92f1-abdaae830d6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.535492 kubelet[3280]: E0706 23:28:36.535446 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:36.536190 kubelet[3280]: E0706 23:28:36.535617 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7wc5" Jul 6 23:28:36.536190 kubelet[3280]: E0706 23:28:36.535653 
3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7wc5" Jul 6 23:28:36.536190 kubelet[3280]: E0706 23:28:36.535721 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7wc5_calico-system(c1099aef-6f47-4c54-92f1-abdaae830d6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7wc5_calico-system(c1099aef-6f47-4c54-92f1-abdaae830d6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f806b50a17cc0b74ec0ebf39d3faea766b3d400d5498593bf91cd9ae4ad313fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7wc5" podUID="c1099aef-6f47-4c54-92f1-abdaae830d6d" Jul 6 23:28:36.613119 containerd[1930]: time="2025-07-06T23:28:36.612646034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:28:36.937591 kubelet[3280]: E0706 23:28:36.937338 3280 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:36.937591 kubelet[3280]: E0706 23:28:36.937467 3280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-goldmane-ca-bundle podName:1d85a67a-24d6-4a23-b48c-0fe13a5fb096 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:37.437436224 +0000 UTC m=+38.388704357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-goldmane-ca-bundle") pod "goldmane-58fd7646b9-bsbzl" (UID: "1d85a67a-24d6-4a23-b48c-0fe13a5fb096") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:36.939705 kubelet[3280]: E0706 23:28:36.939652 3280 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 6 23:28:36.939941 kubelet[3280]: E0706 23:28:36.939781 3280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair podName:fd15636e-8113-4840-a994-881ea05acf18 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:37.43975484 +0000 UTC m=+38.391022973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair") pod "whisker-66fcbbbf57-m5mwv" (UID: "fd15636e-8113-4840-a994-881ea05acf18") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:28:36.939941 kubelet[3280]: E0706 23:28:36.939652 3280 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:36.939941 kubelet[3280]: E0706 23:28:36.939856 3280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-config podName:1d85a67a-24d6-4a23-b48c-0fe13a5fb096 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:37.439840688 +0000 UTC m=+38.391108821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d85a67a-24d6-4a23-b48c-0fe13a5fb096-config") pod "goldmane-58fd7646b9-bsbzl" (UID: "1d85a67a-24d6-4a23-b48c-0fe13a5fb096") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:37.010275 containerd[1930]: time="2025-07-06T23:28:37.009443196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-zg2ph,Uid:ecbe8d15-6e49-4751-89fa-dcb7a48b9dab,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:37.031965 containerd[1930]: time="2025-07-06T23:28:37.031843248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-lslkv,Uid:c3904aef-10f2-4f16-bf54-789fb0de513d,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:37.145379 containerd[1930]: time="2025-07-06T23:28:37.145274113Z" level=error msg="Failed to destroy network for sandbox \"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.148370 containerd[1930]: time="2025-07-06T23:28:37.148298749Z" level=error msg="Failed to destroy network for sandbox \"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.148873 containerd[1930]: time="2025-07-06T23:28:37.148817713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-zg2ph,Uid:ecbe8d15-6e49-4751-89fa-dcb7a48b9dab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.149437 kubelet[3280]: E0706 23:28:37.149329 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.149644 kubelet[3280]: E0706 23:28:37.149554 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" Jul 6 23:28:37.149787 kubelet[3280]: E0706 23:28:37.149594 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" Jul 6 23:28:37.150117 kubelet[3280]: E0706 23:28:37.149992 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7799487779-zg2ph_calico-apiserver(ecbe8d15-6e49-4751-89fa-dcb7a48b9dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7799487779-zg2ph_calico-apiserver(ecbe8d15-6e49-4751-89fa-dcb7a48b9dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0951623dc7ae3fbeb1086b22494db43ea19d02ca1337ee243dae974d141a3b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" podUID="ecbe8d15-6e49-4751-89fa-dcb7a48b9dab" Jul 6 23:28:37.151885 containerd[1930]: time="2025-07-06T23:28:37.151757077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-lslkv,Uid:c3904aef-10f2-4f16-bf54-789fb0de513d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.152349 kubelet[3280]: E0706 23:28:37.152277 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.152447 kubelet[3280]: E0706 23:28:37.152363 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7799487779-lslkv" Jul 6 23:28:37.152447 kubelet[3280]: E0706 23:28:37.152407 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7799487779-lslkv" Jul 6 23:28:37.152571 kubelet[3280]: E0706 23:28:37.152472 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7799487779-lslkv_calico-apiserver(c3904aef-10f2-4f16-bf54-789fb0de513d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7799487779-lslkv_calico-apiserver(c3904aef-10f2-4f16-bf54-789fb0de513d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82244fb38f51359250bdc00bace5e98fe5cfac5577c1017debf9f1a3625f7f12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7799487779-lslkv" podUID="c3904aef-10f2-4f16-bf54-789fb0de513d" Jul 6 23:28:37.591873 systemd[1]: run-netns-cni\x2d6b520204\x2d89da\x2d472f\x2d8228\x2d3bdbe5df37c1.mount: Deactivated successfully. 
Jul 6 23:28:37.658282 containerd[1930]: time="2025-07-06T23:28:37.658171155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bsbzl,Uid:1d85a67a-24d6-4a23-b48c-0fe13a5fb096,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:37.673093 containerd[1930]: time="2025-07-06T23:28:37.672983787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66fcbbbf57-m5mwv,Uid:fd15636e-8113-4840-a994-881ea05acf18,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:37.849023 containerd[1930]: time="2025-07-06T23:28:37.848755324Z" level=error msg="Failed to destroy network for sandbox \"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.853738 containerd[1930]: time="2025-07-06T23:28:37.853538776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66fcbbbf57-m5mwv,Uid:fd15636e-8113-4840-a994-881ea05acf18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.856716 kubelet[3280]: E0706 23:28:37.856213 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.857330 kubelet[3280]: E0706 23:28:37.856957 3280 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66fcbbbf57-m5mwv" Jul 6 23:28:37.857330 kubelet[3280]: E0706 23:28:37.857003 3280 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66fcbbbf57-m5mwv" Jul 6 23:28:37.857639 kubelet[3280]: E0706 23:28:37.857527 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66fcbbbf57-m5mwv_calico-system(fd15636e-8113-4840-a994-881ea05acf18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66fcbbbf57-m5mwv_calico-system(fd15636e-8113-4840-a994-881ea05acf18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c9f5a080ef9fdbffbf7f149d4332a3846f58cecc88d8b90d47be6a1022e759f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66fcbbbf57-m5mwv" podUID="fd15636e-8113-4840-a994-881ea05acf18" Jul 6 23:28:37.860933 systemd[1]: run-netns-cni\x2dbae480ed\x2d51c6\x2df6ed\x2d7ad3\x2d85e918cbff4e.mount: Deactivated successfully. 
Jul 6 23:28:37.875065 containerd[1930]: time="2025-07-06T23:28:37.874806412Z" level=error msg="Failed to destroy network for sandbox \"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.882861 containerd[1930]: time="2025-07-06T23:28:37.882748960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bsbzl,Uid:1d85a67a-24d6-4a23-b48c-0fe13a5fb096,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.884495 kubelet[3280]: E0706 23:28:37.884416 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:37.885058 kubelet[3280]: E0706 23:28:37.884721 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:37.885058 kubelet[3280]: E0706 23:28:37.884983 3280 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-bsbzl" Jul 6 23:28:37.885527 kubelet[3280]: E0706 23:28:37.885220 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-bsbzl_calico-system(1d85a67a-24d6-4a23-b48c-0fe13a5fb096)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-bsbzl_calico-system(1d85a67a-24d6-4a23-b48c-0fe13a5fb096)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a9115a59592a4804920b9925ad04f8c680390c4f30649fd3ebe675df986b838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-bsbzl" podUID="1d85a67a-24d6-4a23-b48c-0fe13a5fb096" Jul 6 23:28:37.890347 systemd[1]: run-netns-cni\x2de027bed2\x2dc8b9\x2d1f20\x2de9a2\x2d5428d9ef9ee8.mount: Deactivated successfully. Jul 6 23:28:42.833028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732845427.mount: Deactivated successfully. 
Jul 6 23:28:42.909367 containerd[1930]: time="2025-07-06T23:28:42.909277185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.912097 containerd[1930]: time="2025-07-06T23:28:42.911997117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:28:42.915408 containerd[1930]: time="2025-07-06T23:28:42.915325737Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.923708 containerd[1930]: time="2025-07-06T23:28:42.923627181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.924748 containerd[1930]: time="2025-07-06T23:28:42.924702765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.311876671s" Jul 6 23:28:42.924949 containerd[1930]: time="2025-07-06T23:28:42.924920181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:28:42.952725 containerd[1930]: time="2025-07-06T23:28:42.952484938Z" level=info msg="CreateContainer within sandbox \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:28:42.979388 containerd[1930]: time="2025-07-06T23:28:42.979329826Z" level=info msg="Container 
c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:42.988405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676271309.mount: Deactivated successfully. Jul 6 23:28:43.013400 containerd[1930]: time="2025-07-06T23:28:43.013346406Z" level=info msg="CreateContainer within sandbox \"7d2a589b8da3ebdf1e66a965fab2979ba092a1df49e2cd5b85e70ca02dc83b95\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\"" Jul 6 23:28:43.014541 containerd[1930]: time="2025-07-06T23:28:43.014380290Z" level=info msg="StartContainer for \"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\"" Jul 6 23:28:43.018361 containerd[1930]: time="2025-07-06T23:28:43.018145326Z" level=info msg="connecting to shim c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9" address="unix:///run/containerd/s/7e9b817bce14b31fb32b3f64c0fdacec88c471f4d4113c22babeaa2449129297" protocol=ttrpc version=3 Jul 6 23:28:43.056387 systemd[1]: Started cri-containerd-c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9.scope - libcontainer container c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9. Jul 6 23:28:43.145189 containerd[1930]: time="2025-07-06T23:28:43.145018111Z" level=info msg="StartContainer for \"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" returns successfully" Jul 6 23:28:43.398440 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:28:43.398686 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 6 23:28:43.692679 kubelet[3280]: I0706 23:28:43.692584 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n6xqb" podStartSLOduration=2.250964644 podStartE2EDuration="17.692560461s" podCreationTimestamp="2025-07-06 23:28:26 +0000 UTC" firstStartedPulling="2025-07-06 23:28:27.485142473 +0000 UTC m=+28.436410606" lastFinishedPulling="2025-07-06 23:28:42.926738302 +0000 UTC m=+43.878006423" observedRunningTime="2025-07-06 23:28:43.692137497 +0000 UTC m=+44.643405654" watchObservedRunningTime="2025-07-06 23:28:43.692560461 +0000 UTC m=+44.643828618" Jul 6 23:28:43.703572 kubelet[3280]: I0706 23:28:43.703512 3280 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair\") pod \"fd15636e-8113-4840-a994-881ea05acf18\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " Jul 6 23:28:43.703738 kubelet[3280]: I0706 23:28:43.703596 3280 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gms8j\" (UniqueName: \"kubernetes.io/projected/fd15636e-8113-4840-a994-881ea05acf18-kube-api-access-gms8j\") pod \"fd15636e-8113-4840-a994-881ea05acf18\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " Jul 6 23:28:43.703738 kubelet[3280]: I0706 23:28:43.703646 3280 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd15636e-8113-4840-a994-881ea05acf18-whisker-ca-bundle\") pod \"fd15636e-8113-4840-a994-881ea05acf18\" (UID: \"fd15636e-8113-4840-a994-881ea05acf18\") " Jul 6 23:28:43.705967 kubelet[3280]: I0706 23:28:43.705902 3280 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd15636e-8113-4840-a994-881ea05acf18-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fd15636e-8113-4840-a994-881ea05acf18" 
(UID: "fd15636e-8113-4840-a994-881ea05acf18"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:28:43.722034 kubelet[3280]: I0706 23:28:43.721963 3280 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd15636e-8113-4840-a994-881ea05acf18-kube-api-access-gms8j" (OuterVolumeSpecName: "kube-api-access-gms8j") pod "fd15636e-8113-4840-a994-881ea05acf18" (UID: "fd15636e-8113-4840-a994-881ea05acf18"). InnerVolumeSpecName "kube-api-access-gms8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:28:43.722975 kubelet[3280]: I0706 23:28:43.722913 3280 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fd15636e-8113-4840-a994-881ea05acf18" (UID: "fd15636e-8113-4840-a994-881ea05acf18"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:28:43.804478 kubelet[3280]: I0706 23:28:43.804424 3280 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd15636e-8113-4840-a994-881ea05acf18-whisker-backend-key-pair\") on node \"ip-172-31-26-116\" DevicePath \"\"" Jul 6 23:28:43.804478 kubelet[3280]: I0706 23:28:43.804479 3280 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gms8j\" (UniqueName: \"kubernetes.io/projected/fd15636e-8113-4840-a994-881ea05acf18-kube-api-access-gms8j\") on node \"ip-172-31-26-116\" DevicePath \"\"" Jul 6 23:28:43.804704 kubelet[3280]: I0706 23:28:43.804509 3280 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd15636e-8113-4840-a994-881ea05acf18-whisker-ca-bundle\") on node \"ip-172-31-26-116\" DevicePath \"\"" Jul 6 23:28:43.837885 systemd[1]: var-lib-kubelet-pods-fd15636e\x2d8113\x2d4840\x2da994\x2d881ea05acf18-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:28:43.838114 systemd[1]: var-lib-kubelet-pods-fd15636e\x2d8113\x2d4840\x2da994\x2d881ea05acf18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgms8j.mount: Deactivated successfully. Jul 6 23:28:43.966165 systemd[1]: Removed slice kubepods-besteffort-podfd15636e_8113_4840_a994_881ea05acf18.slice - libcontainer container kubepods-besteffort-podfd15636e_8113_4840_a994_881ea05acf18.slice. Jul 6 23:28:44.073685 systemd[1]: Created slice kubepods-besteffort-pod303e6e95_4c8e_4088_8578_b039f7be25fa.slice - libcontainer container kubepods-besteffort-pod303e6e95_4c8e_4088_8578_b039f7be25fa.slice. 
Jul 6 23:28:44.107384 kubelet[3280]: I0706 23:28:44.107322 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/303e6e95-4c8e-4088-8578-b039f7be25fa-whisker-ca-bundle\") pod \"whisker-cccfb8b7f-lbtc5\" (UID: \"303e6e95-4c8e-4088-8578-b039f7be25fa\") " pod="calico-system/whisker-cccfb8b7f-lbtc5" Jul 6 23:28:44.107562 kubelet[3280]: I0706 23:28:44.107396 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/303e6e95-4c8e-4088-8578-b039f7be25fa-whisker-backend-key-pair\") pod \"whisker-cccfb8b7f-lbtc5\" (UID: \"303e6e95-4c8e-4088-8578-b039f7be25fa\") " pod="calico-system/whisker-cccfb8b7f-lbtc5" Jul 6 23:28:44.107562 kubelet[3280]: I0706 23:28:44.107455 3280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n85b\" (UniqueName: \"kubernetes.io/projected/303e6e95-4c8e-4088-8578-b039f7be25fa-kube-api-access-5n85b\") pod \"whisker-cccfb8b7f-lbtc5\" (UID: \"303e6e95-4c8e-4088-8578-b039f7be25fa\") " pod="calico-system/whisker-cccfb8b7f-lbtc5" Jul 6 23:28:44.382404 containerd[1930]: time="2025-07-06T23:28:44.382313889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cccfb8b7f-lbtc5,Uid:303e6e95-4c8e-4088-8578-b039f7be25fa,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:44.720931 (udev-worker)[4492]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:28:44.735832 systemd-networkd[1815]: calic5513cb8586: Link UP Jul 6 23:28:44.738643 systemd-networkd[1815]: calic5513cb8586: Gained carrier Jul 6 23:28:44.779416 containerd[1930]: 2025-07-06 23:28:44.434 [INFO][4523] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:44.779416 containerd[1930]: 2025-07-06 23:28:44.522 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0 whisker-cccfb8b7f- calico-system 303e6e95-4c8e-4088-8578-b039f7be25fa 917 0 2025-07-06 23:28:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cccfb8b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-116 whisker-cccfb8b7f-lbtc5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic5513cb8586 [] [] }} ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-" Jul 6 23:28:44.779416 containerd[1930]: 2025-07-06 23:28:44.522 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.779416 containerd[1930]: 2025-07-06 23:28:44.618 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" HandleID="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Workload="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.619 [INFO][4534] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" HandleID="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Workload="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003395f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-116", "pod":"whisker-cccfb8b7f-lbtc5", "timestamp":"2025-07-06 23:28:44.618818986 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.619 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.619 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.619 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.634 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" host="ip-172-31-26-116" Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.643 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.651 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.656 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:44.779803 containerd[1930]: 2025-07-06 23:28:44.664 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.665 [INFO][4534] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" host="ip-172-31-26-116" Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.668 [INFO][4534] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.682 [INFO][4534] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" host="ip-172-31-26-116" Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.697 [INFO][4534] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.193/26] block=192.168.95.192/26 
handle="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" host="ip-172-31-26-116" Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.697 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.193/26] handle="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" host="ip-172-31-26-116" Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.697 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:44.780784 containerd[1930]: 2025-07-06 23:28:44.697 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.193/26] IPv6=[] ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" HandleID="k8s-pod-network.a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Workload="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.781180 containerd[1930]: 2025-07-06 23:28:44.705 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0", GenerateName:"whisker-cccfb8b7f-", Namespace:"calico-system", SelfLink:"", UID:"303e6e95-4c8e-4088-8578-b039f7be25fa", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cccfb8b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"whisker-cccfb8b7f-lbtc5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5513cb8586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:44.781180 containerd[1930]: 2025-07-06 23:28:44.705 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.193/32] ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.781359 containerd[1930]: 2025-07-06 23:28:44.705 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5513cb8586 ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.781359 containerd[1930]: 2025-07-06 23:28:44.744 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.781456 containerd[1930]: 2025-07-06 23:28:44.746 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" 
Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0", GenerateName:"whisker-cccfb8b7f-", Namespace:"calico-system", SelfLink:"", UID:"303e6e95-4c8e-4088-8578-b039f7be25fa", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cccfb8b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f", Pod:"whisker-cccfb8b7f-lbtc5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5513cb8586", MAC:"16:9c:62:54:6d:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:44.781595 containerd[1930]: 2025-07-06 23:28:44.773 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" Namespace="calico-system" Pod="whisker-cccfb8b7f-lbtc5" WorkloadEndpoint="ip--172--31--26--116-k8s-whisker--cccfb8b7f--lbtc5-eth0" Jul 6 23:28:44.841728 containerd[1930]: 
time="2025-07-06T23:28:44.841556735Z" level=info msg="connecting to shim a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f" address="unix:///run/containerd/s/cb27c9d23d863a1ba8a7b388405499c51397aca0bde6476f1da482afec33bce8" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:44.901387 systemd[1]: Started cri-containerd-a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f.scope - libcontainer container a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f. Jul 6 23:28:44.906752 containerd[1930]: time="2025-07-06T23:28:44.906680867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"1e932d421ff810b22d95194a511db8268d65b6ba093ec9d60cec84bc755d4279\" pid:4555 exit_status:1 exited_at:{seconds:1751844524 nanos:905958527}" Jul 6 23:28:44.997335 containerd[1930]: time="2025-07-06T23:28:44.996763860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cccfb8b7f-lbtc5,Uid:303e6e95-4c8e-4088-8578-b039f7be25fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f\"" Jul 6 23:28:45.001806 containerd[1930]: time="2025-07-06T23:28:45.001639136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:28:45.316352 kubelet[3280]: I0706 23:28:45.316238 3280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd15636e-8113-4840-a994-881ea05acf18" path="/var/lib/kubelet/pods/fd15636e-8113-4840-a994-881ea05acf18/volumes" Jul 6 23:28:45.981825 containerd[1930]: time="2025-07-06T23:28:45.981702061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"f6daa1142b352337e75bdb77ccb2db1b4196a4ad89c19e28f291abc2983216ea\" pid:4721 exit_status:1 exited_at:{seconds:1751844525 nanos:981317461}" Jul 6 23:28:46.302493 containerd[1930]: 
time="2025-07-06T23:28:46.302335222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:46.306238 containerd[1930]: time="2025-07-06T23:28:46.306173710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:28:46.310063 containerd[1930]: time="2025-07-06T23:28:46.309080974Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:46.314420 containerd[1930]: time="2025-07-06T23:28:46.314346646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:46.316486 containerd[1930]: time="2025-07-06T23:28:46.316403794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.314620622s" Jul 6 23:28:46.316633 containerd[1930]: time="2025-07-06T23:28:46.316486414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:28:46.325198 containerd[1930]: time="2025-07-06T23:28:46.324037726Z" level=info msg="CreateContainer within sandbox \"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:28:46.352083 containerd[1930]: time="2025-07-06T23:28:46.347349082Z" level=info msg="Container 
e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:46.373888 containerd[1930]: time="2025-07-06T23:28:46.373510139Z" level=info msg="CreateContainer within sandbox \"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6\"" Jul 6 23:28:46.377808 containerd[1930]: time="2025-07-06T23:28:46.377745443Z" level=info msg="StartContainer for \"e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6\"" Jul 6 23:28:46.381204 containerd[1930]: time="2025-07-06T23:28:46.381004943Z" level=info msg="connecting to shim e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6" address="unix:///run/containerd/s/cb27c9d23d863a1ba8a7b388405499c51397aca0bde6476f1da482afec33bce8" protocol=ttrpc version=3 Jul 6 23:28:46.431364 systemd[1]: Started cri-containerd-e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6.scope - libcontainer container e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6. Jul 6 23:28:46.443556 systemd-networkd[1815]: vxlan.calico: Link UP Jul 6 23:28:46.443570 systemd-networkd[1815]: vxlan.calico: Gained carrier Jul 6 23:28:46.533032 (udev-worker)[4494]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:28:46.550666 systemd-networkd[1815]: calic5513cb8586: Gained IPv6LL Jul 6 23:28:46.610218 containerd[1930]: time="2025-07-06T23:28:46.609357420Z" level=info msg="StartContainer for \"e57578e5c1d3ed12feeac42a46695d3bab51657dd655d38a8a0585b5be035da6\" returns successfully" Jul 6 23:28:46.613772 containerd[1930]: time="2025-07-06T23:28:46.613703976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:28:47.636791 systemd-networkd[1815]: vxlan.calico: Gained IPv6LL Jul 6 23:28:48.306916 containerd[1930]: time="2025-07-06T23:28:48.306865764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w7q6j,Uid:eb5c3d94-8803-44a1-af17-51b481acd517,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:48.308953 containerd[1930]: time="2025-07-06T23:28:48.307115820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-lslkv,Uid:c3904aef-10f2-4f16-bf54-789fb0de513d,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:48.310450 containerd[1930]: time="2025-07-06T23:28:48.310401564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-zg2ph,Uid:ecbe8d15-6e49-4751-89fa-dcb7a48b9dab,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:48.860978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453387767.mount: Deactivated successfully. Jul 6 23:28:48.861226 (udev-worker)[4829]: Network interface NamePolicy= disabled on kernel command line. 
Jul 6 23:28:48.866722 systemd-networkd[1815]: calib085186d536: Link UP Jul 6 23:28:48.870897 systemd-networkd[1815]: calib085186d536: Gained carrier Jul 6 23:28:48.914273 containerd[1930]: 2025-07-06 23:28:48.516 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0 calico-apiserver-7799487779- calico-apiserver ecbe8d15-6e49-4751-89fa-dcb7a48b9dab 849 0 2025-07-06 23:28:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7799487779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-116 calico-apiserver-7799487779-zg2ph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib085186d536 [] [] }} ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-" Jul 6 23:28:48.914273 containerd[1930]: 2025-07-06 23:28:48.516 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.914273 containerd[1930]: 2025-07-06 23:28:48.727 [INFO][4922] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" HandleID="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 
23:28:48.727 [INFO][4922] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" HandleID="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103c30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-116", "pod":"calico-apiserver-7799487779-zg2ph", "timestamp":"2025-07-06 23:28:48.727165178 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.727 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.727 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.728 [INFO][4922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.753 [INFO][4922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" host="ip-172-31-26-116" Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.766 [INFO][4922] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.778 [INFO][4922] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.784 [INFO][4922] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:48.914596 containerd[1930]: 2025-07-06 23:28:48.793 [INFO][4922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.794 [INFO][4922] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" host="ip-172-31-26-116" Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.799 [INFO][4922] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.810 [INFO][4922] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" host="ip-172-31-26-116" Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.831 [INFO][4922] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.194/26] block=192.168.95.192/26 
handle="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" host="ip-172-31-26-116" Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.831 [INFO][4922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.194/26] handle="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" host="ip-172-31-26-116" Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.831 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:48.915101 containerd[1930]: 2025-07-06 23:28:48.831 [INFO][4922] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.194/26] IPv6=[] ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" HandleID="k8s-pod-network.1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.915467 containerd[1930]: 2025-07-06 23:28:48.845 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0", GenerateName:"calico-apiserver-7799487779-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecbe8d15-6e49-4751-89fa-dcb7a48b9dab", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7799487779", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"calico-apiserver-7799487779-zg2ph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib085186d536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:48.915604 containerd[1930]: 2025-07-06 23:28:48.847 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.194/32] ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.915604 containerd[1930]: 2025-07-06 23:28:48.848 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib085186d536 ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.915604 containerd[1930]: 2025-07-06 23:28:48.870 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 
23:28:48.915746 containerd[1930]: 2025-07-06 23:28:48.872 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0", GenerateName:"calico-apiserver-7799487779-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecbe8d15-6e49-4751-89fa-dcb7a48b9dab", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7799487779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c", Pod:"calico-apiserver-7799487779-zg2ph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib085186d536", MAC:"7e:ba:71:63:02:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 
23:28:48.915916 containerd[1930]: 2025-07-06 23:28:48.903 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-zg2ph" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--zg2ph-eth0" Jul 6 23:28:48.947863 containerd[1930]: time="2025-07-06T23:28:48.946903047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:48.952351 containerd[1930]: time="2025-07-06T23:28:48.952290639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:28:48.959078 containerd[1930]: time="2025-07-06T23:28:48.958080087Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:48.986569 containerd[1930]: time="2025-07-06T23:28:48.986491048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:48.998337 containerd[1930]: time="2025-07-06T23:28:48.996916084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.383145616s" Jul 6 23:28:48.998337 containerd[1930]: time="2025-07-06T23:28:48.997922608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:28:49.019995 containerd[1930]: time="2025-07-06T23:28:49.019327512Z" level=info msg="connecting to shim 1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c" address="unix:///run/containerd/s/e93e6a4019c561dc4f3d02178277b7e465025f5aaafc7bbbe7848a11d6e90688" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:49.026846 systemd-networkd[1815]: caliab503c460e1: Link UP Jul 6 23:28:49.029654 systemd-networkd[1815]: caliab503c460e1: Gained carrier Jul 6 23:28:49.034794 containerd[1930]: time="2025-07-06T23:28:49.033401532Z" level=info msg="CreateContainer within sandbox \"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:28:49.075955 containerd[1930]: time="2025-07-06T23:28:49.075771120Z" level=info msg="Container 11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:49.100810 containerd[1930]: 2025-07-06 23:28:48.515 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0 calico-apiserver-7799487779- calico-apiserver c3904aef-10f2-4f16-bf54-789fb0de513d 851 0 2025-07-06 23:28:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7799487779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-116 calico-apiserver-7799487779-lslkv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab503c460e1 [] [] }} ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" 
WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-" Jul 6 23:28:49.100810 containerd[1930]: 2025-07-06 23:28:48.516 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.100810 containerd[1930]: 2025-07-06 23:28:48.740 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" HandleID="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.741 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" HandleID="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bbc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-116", "pod":"calico-apiserver-7799487779-lslkv", "timestamp":"2025-07-06 23:28:48.740312282 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.741 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.831 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.832 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.867 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" host="ip-172-31-26-116" Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.890 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.918 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.922 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.101913 containerd[1930]: 2025-07-06 23:28:48.931 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.931 [INFO][4920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" host="ip-172-31-26-116" Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.936 [INFO][4920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544 Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.953 [INFO][4920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" host="ip-172-31-26-116" Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.980 [INFO][4920] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.195/26] block=192.168.95.192/26 
handle="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" host="ip-172-31-26-116" Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.980 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.195/26] handle="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" host="ip-172-31-26-116" Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.981 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:49.103984 containerd[1930]: 2025-07-06 23:28:48.981 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.195/26] IPv6=[] ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" HandleID="k8s-pod-network.b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Workload="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.106210 containerd[1930]: 2025-07-06 23:28:49.008 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0", GenerateName:"calico-apiserver-7799487779-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3904aef-10f2-4f16-bf54-789fb0de513d", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7799487779", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"calico-apiserver-7799487779-lslkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab503c460e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:49.106367 containerd[1930]: 2025-07-06 23:28:49.008 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.195/32] ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.106367 containerd[1930]: 2025-07-06 23:28:49.008 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab503c460e1 ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.106367 containerd[1930]: 2025-07-06 23:28:49.034 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 
23:28:49.106499 containerd[1930]: 2025-07-06 23:28:49.036 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0", GenerateName:"calico-apiserver-7799487779-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3904aef-10f2-4f16-bf54-789fb0de513d", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7799487779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544", Pod:"calico-apiserver-7799487779-lslkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab503c460e1", MAC:"46:6b:7e:0e:e4:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 
23:28:49.106623 containerd[1930]: 2025-07-06 23:28:49.073 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" Namespace="calico-apiserver" Pod="calico-apiserver-7799487779-lslkv" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--apiserver--7799487779--lslkv-eth0" Jul 6 23:28:49.117506 containerd[1930]: time="2025-07-06T23:28:49.115467768Z" level=info msg="CreateContainer within sandbox \"a9d977905618fae0e24b09e45c23d23020b8fd143be5bdbffa6213ba48908d4f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2\"" Jul 6 23:28:49.119285 containerd[1930]: time="2025-07-06T23:28:49.118889064Z" level=info msg="StartContainer for \"11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2\"" Jul 6 23:28:49.124701 containerd[1930]: time="2025-07-06T23:28:49.124511652Z" level=info msg="connecting to shim 11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2" address="unix:///run/containerd/s/cb27c9d23d863a1ba8a7b388405499c51397aca0bde6476f1da482afec33bce8" protocol=ttrpc version=3 Jul 6 23:28:49.207852 systemd[1]: Started cri-containerd-1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c.scope - libcontainer container 1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c. 
Jul 6 23:28:49.224359 containerd[1930]: time="2025-07-06T23:28:49.224279209Z" level=info msg="connecting to shim b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544" address="unix:///run/containerd/s/349e68542d7b93f9955137bbb7eb18c0cb4fa128ce020ea1b49e746fa884d0e4" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:49.269570 systemd-networkd[1815]: cali3f8a8f71f9c: Link UP Jul 6 23:28:49.279254 systemd-networkd[1815]: cali3f8a8f71f9c: Gained carrier Jul 6 23:28:49.298432 systemd[1]: Started cri-containerd-11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2.scope - libcontainer container 11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2. Jul 6 23:28:49.311556 containerd[1930]: time="2025-07-06T23:28:49.311203165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bbcddc7c-s49wp,Uid:fc56821c-5139-4749-84ae-639dc6e9b07f,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:49.332419 containerd[1930]: 2025-07-06 23:28:48.561 [INFO][4902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0 coredns-7c65d6cfc9- kube-system eb5c3d94-8803-44a1-af17-51b481acd517 839 0 2025-07-06 23:28:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-116 coredns-7c65d6cfc9-w7q6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f8a8f71f9c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-" Jul 6 23:28:49.332419 containerd[1930]: 2025-07-06 23:28:48.561 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.332419 containerd[1930]: 2025-07-06 23:28:48.767 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" HandleID="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:48.768 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" HandleID="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038a0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-116", "pod":"coredns-7c65d6cfc9-w7q6j", "timestamp":"2025-07-06 23:28:48.767856111 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:48.768 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:48.983 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:48.988 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:49.058 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" host="ip-172-31-26-116" Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:49.095 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:49.130 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:49.140 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.332922 containerd[1930]: 2025-07-06 23:28:49.151 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.151 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" host="ip-172-31-26-116" Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.161 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.174 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" host="ip-172-31-26-116" Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.194 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.196/26] block=192.168.95.192/26 
handle="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" host="ip-172-31-26-116" Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.197 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.196/26] handle="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" host="ip-172-31-26-116" Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.197 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:49.334868 containerd[1930]: 2025-07-06 23:28:49.199 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.196/26] IPv6=[] ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" HandleID="k8s-pod-network.119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.226 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb5c3d94-8803-44a1-af17-51b481acd517", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"coredns-7c65d6cfc9-w7q6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8a8f71f9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.227 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.196/32] ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.227 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f8a8f71f9c ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.271 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.272 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb5c3d94-8803-44a1-af17-51b481acd517", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d", Pod:"coredns-7c65d6cfc9-w7q6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8a8f71f9c", MAC:"2e:0a:fc:68:6f:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:49.335265 containerd[1930]: 2025-07-06 23:28:49.304 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w7q6j" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--w7q6j-eth0" Jul 6 23:28:49.394412 systemd[1]: Started cri-containerd-b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544.scope - libcontainer container b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544. Jul 6 23:28:49.496204 containerd[1930]: time="2025-07-06T23:28:49.494892758Z" level=info msg="connecting to shim 119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d" address="unix:///run/containerd/s/f972836e4a423265afb0240f535d58255b1ebee4ff4e7c9a36ec7a3a943ba68a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:49.604181 systemd[1]: Started cri-containerd-119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d.scope - libcontainer container 119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d. 
Jul 6 23:28:49.840772 containerd[1930]: time="2025-07-06T23:28:49.840698992Z" level=info msg="StartContainer for \"11041586c96cf40c3d0088b189d6e42cb5c828b1362c12f162ef03eb407f3ce2\" returns successfully" Jul 6 23:28:49.931063 containerd[1930]: time="2025-07-06T23:28:49.930895804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-zg2ph,Uid:ecbe8d15-6e49-4751-89fa-dcb7a48b9dab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c\"" Jul 6 23:28:49.959443 containerd[1930]: time="2025-07-06T23:28:49.959139952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:28:49.964975 containerd[1930]: time="2025-07-06T23:28:49.963978544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w7q6j,Uid:eb5c3d94-8803-44a1-af17-51b481acd517,Namespace:kube-system,Attempt:0,} returns sandbox id \"119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d\"" Jul 6 23:28:50.001993 containerd[1930]: time="2025-07-06T23:28:50.001912597Z" level=info msg="CreateContainer within sandbox \"119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:28:50.052599 containerd[1930]: time="2025-07-06T23:28:50.052386553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7799487779-lslkv,Uid:c3904aef-10f2-4f16-bf54-789fb0de513d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544\"" Jul 6 23:28:50.086139 containerd[1930]: time="2025-07-06T23:28:50.084389437Z" level=info msg="Container 6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:50.099326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226176978.mount: Deactivated successfully. 
Jul 6 23:28:50.138414 containerd[1930]: time="2025-07-06T23:28:50.138334333Z" level=info msg="CreateContainer within sandbox \"119b50477399da169ac65a94fdaeb11fdb16d1ce5c1ce5adece30162d275fb5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34\"" Jul 6 23:28:50.145480 containerd[1930]: time="2025-07-06T23:28:50.144534673Z" level=info msg="StartContainer for \"6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34\"" Jul 6 23:28:50.157614 systemd-networkd[1815]: calia2a334577ef: Link UP Jul 6 23:28:50.159654 containerd[1930]: time="2025-07-06T23:28:50.159479977Z" level=info msg="connecting to shim 6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34" address="unix:///run/containerd/s/f972836e4a423265afb0240f535d58255b1ebee4ff4e7c9a36ec7a3a943ba68a" protocol=ttrpc version=3 Jul 6 23:28:50.161368 systemd-networkd[1815]: calia2a334577ef: Gained carrier Jul 6 23:28:50.196979 systemd-networkd[1815]: calib085186d536: Gained IPv6LL Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.603 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0 calico-kube-controllers-64bbcddc7c- calico-system fc56821c-5139-4749-84ae-639dc6e9b07f 843 0 2025-07-06 23:28:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64bbcddc7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-116 calico-kube-controllers-64bbcddc7c-s49wp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2a334577ef [] [] }} ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" 
Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.606 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.816 [INFO][5120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" HandleID="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Workload="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.820 [INFO][5120] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" HandleID="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Workload="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032a220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-116", "pod":"calico-kube-controllers-64bbcddc7c-s49wp", "timestamp":"2025-07-06 23:28:49.816661648 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.829 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.829 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.829 [INFO][5120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.900 [INFO][5120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.953 [INFO][5120] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:49.997 [INFO][5120] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.009 [INFO][5120] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.028 [INFO][5120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.029 [INFO][5120] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.036 [INFO][5120] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4 Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.069 [INFO][5120] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 
2025-07-06 23:28:50.111 [INFO][5120] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.197/26] block=192.168.95.192/26 handle="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.112 [INFO][5120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.197/26] handle="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" host="ip-172-31-26-116" Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.114 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:50.208450 containerd[1930]: 2025-07-06 23:28:50.115 [INFO][5120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.197/26] IPv6=[] ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" HandleID="k8s-pod-network.1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Workload="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.141 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0", GenerateName:"calico-kube-controllers-64bbcddc7c-", Namespace:"calico-system", SelfLink:"", UID:"fc56821c-5139-4749-84ae-639dc6e9b07f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64bbcddc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"calico-kube-controllers-64bbcddc7c-s49wp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2a334577ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.141 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.197/32] ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.141 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2a334577ef ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.166 [INFO][5049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.167 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0", GenerateName:"calico-kube-controllers-64bbcddc7c-", Namespace:"calico-system", SelfLink:"", UID:"fc56821c-5139-4749-84ae-639dc6e9b07f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64bbcddc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4", Pod:"calico-kube-controllers-64bbcddc7c-s49wp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2a334577ef", MAC:"be:3f:95:7f:09:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:50.209937 containerd[1930]: 2025-07-06 23:28:50.189 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" Namespace="calico-system" Pod="calico-kube-controllers-64bbcddc7c-s49wp" WorkloadEndpoint="ip--172--31--26--116-k8s-calico--kube--controllers--64bbcddc7c--s49wp-eth0" Jul 6 23:28:50.287408 systemd[1]: Started cri-containerd-6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34.scope - libcontainer container 6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34. Jul 6 23:28:50.310936 containerd[1930]: time="2025-07-06T23:28:50.310862378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgfgz,Uid:d0e807e9-0b9a-42a8-89c1-7d48e397dd4f,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:50.319697 containerd[1930]: time="2025-07-06T23:28:50.319596530Z" level=info msg="connecting to shim 1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4" address="unix:///run/containerd/s/82c436400fce71d2677b4924dccc82ee9b72cabad5abdf274f501ace22faa91b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:50.400179 systemd[1]: Started cri-containerd-1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4.scope - libcontainer container 1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4. 
Jul 6 23:28:50.450068 containerd[1930]: time="2025-07-06T23:28:50.450000483Z" level=info msg="StartContainer for \"6a6633aaf4548d0f6539c06a7eae7ded21bd241150152fb994e8af84600b2c34\" returns successfully" Jul 6 23:28:50.452321 systemd-networkd[1815]: cali3f8a8f71f9c: Gained IPv6LL Jul 6 23:28:50.595077 containerd[1930]: time="2025-07-06T23:28:50.594546016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bbcddc7c-s49wp,Uid:fc56821c-5139-4749-84ae-639dc6e9b07f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4\"" Jul 6 23:28:50.715816 systemd-networkd[1815]: calidc069c5f027: Link UP Jul 6 23:28:50.718654 systemd-networkd[1815]: calidc069c5f027: Gained carrier Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.510 [INFO][5216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0 coredns-7c65d6cfc9- kube-system d0e807e9-0b9a-42a8-89c1-7d48e397dd4f 850 0 2025-07-06 23:28:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-116 coredns-7c65d6cfc9-hgfgz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc069c5f027 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.510 [INFO][5216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" 
WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.634 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" HandleID="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.634 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" HandleID="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103930), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-116", "pod":"coredns-7c65d6cfc9-hgfgz", "timestamp":"2025-07-06 23:28:50.633021172 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.634 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.634 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.634 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.651 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.659 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.671 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.676 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.682 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.682 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.685 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.692 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.705 [INFO][5253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.198/26] block=192.168.95.192/26 
handle="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.706 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.198/26] handle="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" host="ip-172-31-26-116" Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.706 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:28:50.786852 containerd[1930]: 2025-07-06 23:28:50.706 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.198/26] IPv6=[] ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" HandleID="k8s-pod-network.d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Workload="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.710 [INFO][5216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d0e807e9-0b9a-42a8-89c1-7d48e397dd4f", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"coredns-7c65d6cfc9-hgfgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc069c5f027", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.711 [INFO][5216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.198/32] ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.711 [INFO][5216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc069c5f027 ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.719 [INFO][5216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.729 [INFO][5216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d0e807e9-0b9a-42a8-89c1-7d48e397dd4f", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e", Pod:"coredns-7c65d6cfc9-hgfgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc069c5f027", MAC:"8e:4d:25:05:c1:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:50.788705 containerd[1930]: 2025-07-06 23:28:50.774 [INFO][5216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hgfgz" WorkloadEndpoint="ip--172--31--26--116-k8s-coredns--7c65d6cfc9--hgfgz-eth0" Jul 6 23:28:50.798171 kubelet[3280]: I0706 23:28:50.797787 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cccfb8b7f-lbtc5" podStartSLOduration=2.776654677 podStartE2EDuration="6.797764457s" podCreationTimestamp="2025-07-06 23:28:44 +0000 UTC" firstStartedPulling="2025-07-06 23:28:45.000898676 +0000 UTC m=+45.952166821" lastFinishedPulling="2025-07-06 23:28:49.022008468 +0000 UTC m=+49.973276601" observedRunningTime="2025-07-06 23:28:50.797624093 +0000 UTC m=+51.748892262" watchObservedRunningTime="2025-07-06 23:28:50.797764457 +0000 UTC m=+51.749032578" Jul 6 23:28:50.842292 kubelet[3280]: I0706 23:28:50.842179 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w7q6j" podStartSLOduration=46.842129705 podStartE2EDuration="46.842129705s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:50.840986837 +0000 UTC m=+51.792254994" watchObservedRunningTime="2025-07-06 23:28:50.842129705 +0000 UTC m=+51.793397838" Jul 6 23:28:50.852870 containerd[1930]: time="2025-07-06T23:28:50.852798797Z" level=info msg="connecting to shim 
d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e" address="unix:///run/containerd/s/01dac1429e549b2f2849f7b6e8eb2daad681c41514f6816c8bf21bf42697117b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:50.941546 systemd[1]: Started cri-containerd-d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e.scope - libcontainer container d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e. Jul 6 23:28:50.965363 systemd-networkd[1815]: caliab503c460e1: Gained IPv6LL Jul 6 23:28:51.068767 containerd[1930]: time="2025-07-06T23:28:51.068622542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgfgz,Uid:d0e807e9-0b9a-42a8-89c1-7d48e397dd4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e\"" Jul 6 23:28:51.080560 containerd[1930]: time="2025-07-06T23:28:51.080450918Z" level=info msg="CreateContainer within sandbox \"d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:28:51.107160 containerd[1930]: time="2025-07-06T23:28:51.106217846Z" level=info msg="Container edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:51.135885 containerd[1930]: time="2025-07-06T23:28:51.135787118Z" level=info msg="CreateContainer within sandbox \"d655a033c416caefdf23ddea69d19b36ca506d41af8be3617c8f9b780570a64e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88\"" Jul 6 23:28:51.141389 containerd[1930]: time="2025-07-06T23:28:51.141321698Z" level=info msg="StartContainer for \"edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88\"" Jul 6 23:28:51.146105 containerd[1930]: time="2025-07-06T23:28:51.145961966Z" level=info msg="connecting to shim edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88" 
address="unix:///run/containerd/s/01dac1429e549b2f2849f7b6e8eb2daad681c41514f6816c8bf21bf42697117b" protocol=ttrpc version=3 Jul 6 23:28:51.238381 systemd[1]: Started cri-containerd-edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88.scope - libcontainer container edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88. Jul 6 23:28:51.350348 systemd-networkd[1815]: calia2a334577ef: Gained IPv6LL Jul 6 23:28:51.413432 containerd[1930]: time="2025-07-06T23:28:51.413356084Z" level=info msg="StartContainer for \"edfbac6c693450679257be319f5d3317013ad5d999c8b8987997e5e7a9840d88\" returns successfully" Jul 6 23:28:51.837948 kubelet[3280]: I0706 23:28:51.837857 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hgfgz" podStartSLOduration=47.837834858 podStartE2EDuration="47.837834858s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:51.810755502 +0000 UTC m=+52.762023659" watchObservedRunningTime="2025-07-06 23:28:51.837834858 +0000 UTC m=+52.789102991" Jul 6 23:28:52.307872 containerd[1930]: time="2025-07-06T23:28:52.307711684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bsbzl,Uid:1d85a67a-24d6-4a23-b48c-0fe13a5fb096,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:52.309385 containerd[1930]: time="2025-07-06T23:28:52.309191356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7wc5,Uid:c1099aef-6f47-4c54-92f1-abdaae830d6d,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:52.692545 systemd-networkd[1815]: calidc069c5f027: Gained IPv6LL Jul 6 23:28:52.804923 systemd-networkd[1815]: cali98bd7ae1ec6: Link UP Jul 6 23:28:52.808374 systemd-networkd[1815]: cali98bd7ae1ec6: Gained carrier Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.586 [INFO][5379] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0 csi-node-driver- calico-system c1099aef-6f47-4c54-92f1-abdaae830d6d 741 0 2025-07-06 23:28:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-116 csi-node-driver-h7wc5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali98bd7ae1ec6 [] [] }} ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.586 [INFO][5379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.684 [INFO][5404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" HandleID="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Workload="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.685 [INFO][5404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" HandleID="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Workload="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-116", "pod":"csi-node-driver-h7wc5", "timestamp":"2025-07-06 23:28:52.68492901 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.685 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.685 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.685 [INFO][5404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.711 [INFO][5404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.720 [INFO][5404] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.736 [INFO][5404] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.743 [INFO][5404] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.748 [INFO][5404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.749 [INFO][5404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 
handle="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.755 [INFO][5404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1 Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.763 [INFO][5404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.780 [INFO][5404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.199/26] block=192.168.95.192/26 handle="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.780 [INFO][5404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.199/26] handle="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" host="ip-172-31-26-116" Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.780 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:52.857172 containerd[1930]: 2025-07-06 23:28:52.781 [INFO][5404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.199/26] IPv6=[] ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" HandleID="k8s-pod-network.99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Workload="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.793 [INFO][5379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1099aef-6f47-4c54-92f1-abdaae830d6d", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"csi-node-driver-h7wc5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98bd7ae1ec6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.794 [INFO][5379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.199/32] ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.796 [INFO][5379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98bd7ae1ec6 ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.812 [INFO][5379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.815 [INFO][5379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1099aef-6f47-4c54-92f1-abdaae830d6d", 
ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1", Pod:"csi-node-driver-h7wc5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98bd7ae1ec6", MAC:"fa:7c:f1:41:ce:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:52.862099 containerd[1930]: 2025-07-06 23:28:52.848 [INFO][5379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" Namespace="calico-system" Pod="csi-node-driver-h7wc5" WorkloadEndpoint="ip--172--31--26--116-k8s-csi--node--driver--h7wc5-eth0" Jul 6 23:28:52.954332 systemd-networkd[1815]: calia1feabb54a3: Link UP Jul 6 23:28:52.958465 systemd-networkd[1815]: calia1feabb54a3: Gained carrier Jul 6 23:28:53.004504 containerd[1930]: time="2025-07-06T23:28:53.004365928Z" level=info msg="connecting to shim 99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1" 
address="unix:///run/containerd/s/c058a551c181d5ff3a97c57069e6c2424281446b22323750cac67c1df932c79e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.557 [INFO][5373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0 goldmane-58fd7646b9- calico-system 1d85a67a-24d6-4a23-b48c-0fe13a5fb096 847 0 2025-07-06 23:28:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-116 goldmane-58fd7646b9-bsbzl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia1feabb54a3 [] [] }} ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.557 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.709 [INFO][5398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" HandleID="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Workload="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.709 [INFO][5398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" 
HandleID="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Workload="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000374a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-116", "pod":"goldmane-58fd7646b9-bsbzl", "timestamp":"2025-07-06 23:28:52.70915407 +0000 UTC"}, Hostname:"ip-172-31-26-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.709 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.780 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.782 [INFO][5398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-116' Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.820 [INFO][5398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.848 [INFO][5398] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.865 [INFO][5398] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.870 [INFO][5398] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.880 [INFO][5398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ip-172-31-26-116" Jul 6 
23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.880 [INFO][5398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.885 [INFO][5398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33 Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.899 [INFO][5398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.933 [INFO][5398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.200/26] block=192.168.95.192/26 handle="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.933 [INFO][5398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.200/26] handle="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" host="ip-172-31-26-116" Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.933 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:53.040229 containerd[1930]: 2025-07-06 23:28:52.933 [INFO][5398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.200/26] IPv6=[] ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" HandleID="k8s-pod-network.23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Workload="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0"
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:52.941 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1d85a67a-24d6-4a23-b48c-0fe13a5fb096", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"", Pod:"goldmane-58fd7646b9-bsbzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1feabb54a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:52.942 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.200/32] ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0"
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:52.942 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1feabb54a3 ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0"
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:52.952 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0"
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:52.953 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1d85a67a-24d6-4a23-b48c-0fe13a5fb096", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-116", ContainerID:"23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33", Pod:"goldmane-58fd7646b9-bsbzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1feabb54a3", MAC:"ea:b5:09:63:09:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:28:53.045613 containerd[1930]: 2025-07-06 23:28:53.020 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" Namespace="calico-system" Pod="goldmane-58fd7646b9-bsbzl" WorkloadEndpoint="ip--172--31--26--116-k8s-goldmane--58fd7646b9--bsbzl-eth0"
Jul 6 23:28:53.144945 containerd[1930]: time="2025-07-06T23:28:53.144603052Z" level=info msg="connecting to shim 23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33" address="unix:///run/containerd/s/437354fb97fe867959506b5e1f4eac28c39c61a4015cf6f4c7a2cf118e9d3bf1" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:28:53.209738 systemd[1]: Started cri-containerd-99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1.scope - libcontainer container 99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1.
Jul 6 23:28:53.268386 systemd[1]: Started cri-containerd-23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33.scope - libcontainer container 23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33.
Jul 6 23:28:53.352411 containerd[1930]: time="2025-07-06T23:28:53.352348961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7wc5,Uid:c1099aef-6f47-4c54-92f1-abdaae830d6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1\""
Jul 6 23:28:53.439967 containerd[1930]: time="2025-07-06T23:28:53.439831794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bsbzl,Uid:1d85a67a-24d6-4a23-b48c-0fe13a5fb096,Namespace:calico-system,Attempt:0,} returns sandbox id \"23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33\""
Jul 6 23:28:54.677030 systemd-networkd[1815]: cali98bd7ae1ec6: Gained IPv6LL
Jul 6 23:28:54.807011 containerd[1930]: time="2025-07-06T23:28:54.806626401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:54.809078 containerd[1930]: time="2025-07-06T23:28:54.808980609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149"
Jul 6 23:28:54.811511 containerd[1930]: time="2025-07-06T23:28:54.811436121Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:54.816937 containerd[1930]: time="2025-07-06T23:28:54.816848505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:54.818528 containerd[1930]: time="2025-07-06T23:28:54.818347137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.859131465s"
Jul 6 23:28:54.818528 containerd[1930]: time="2025-07-06T23:28:54.818402529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 6 23:28:54.821515 containerd[1930]: time="2025-07-06T23:28:54.820274829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:28:54.825723 containerd[1930]: time="2025-07-06T23:28:54.825408717Z" level=info msg="CreateContainer within sandbox \"1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:28:54.843537 containerd[1930]: time="2025-07-06T23:28:54.843463917Z" level=info msg="Container dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:54.858459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826953105.mount: Deactivated successfully.
Jul 6 23:28:54.866577 containerd[1930]: time="2025-07-06T23:28:54.866496237Z" level=info msg="CreateContainer within sandbox \"1b4f6a7b7c00d55e6f9eb9584f17252eeabca83cef78a9992d057033795b3d1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555\""
Jul 6 23:28:54.868673 systemd-networkd[1815]: calia1feabb54a3: Gained IPv6LL
Jul 6 23:28:54.870438 containerd[1930]: time="2025-07-06T23:28:54.870274257Z" level=info msg="StartContainer for \"dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555\""
Jul 6 23:28:54.874940 containerd[1930]: time="2025-07-06T23:28:54.874872729Z" level=info msg="connecting to shim dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555" address="unix:///run/containerd/s/e93e6a4019c561dc4f3d02178277b7e465025f5aaafc7bbbe7848a11d6e90688" protocol=ttrpc version=3
Jul 6 23:28:54.914360 systemd[1]: Started cri-containerd-dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555.scope - libcontainer container dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555.
Jul 6 23:28:55.011156 containerd[1930]: time="2025-07-06T23:28:55.010936914Z" level=info msg="StartContainer for \"dab8a85740c7388b6f4581078560dd3a5da170cc441dc476a690f2caf45a5555\" returns successfully"
Jul 6 23:28:55.169761 containerd[1930]: time="2025-07-06T23:28:55.169688898Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:28:55.173543 containerd[1930]: time="2025-07-06T23:28:55.173453850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 6 23:28:55.176807 containerd[1930]: time="2025-07-06T23:28:55.176725638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 354.865345ms"
Jul 6 23:28:55.176931 containerd[1930]: time="2025-07-06T23:28:55.176804466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 6 23:28:55.178825 containerd[1930]: time="2025-07-06T23:28:55.178722126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 6 23:28:55.181759 containerd[1930]: time="2025-07-06T23:28:55.181692438Z" level=info msg="CreateContainer within sandbox \"b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:28:55.204414 containerd[1930]: time="2025-07-06T23:28:55.204345042Z" level=info msg="Container 26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:28:55.212713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972992994.mount: Deactivated successfully.
Jul 6 23:28:55.233064 containerd[1930]: time="2025-07-06T23:28:55.231966511Z" level=info msg="CreateContainer within sandbox \"b6db2ad46b0161d1073abf9c4117860783a9a35b9f30a3a41a52f1b892541544\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36\""
Jul 6 23:28:55.234436 containerd[1930]: time="2025-07-06T23:28:55.234290215Z" level=info msg="StartContainer for \"26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36\""
Jul 6 23:28:55.240056 containerd[1930]: time="2025-07-06T23:28:55.238729315Z" level=info msg="connecting to shim 26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36" address="unix:///run/containerd/s/349e68542d7b93f9955137bbb7eb18c0cb4fa128ce020ea1b49e746fa884d0e4" protocol=ttrpc version=3
Jul 6 23:28:55.282419 systemd[1]: Started cri-containerd-26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36.scope - libcontainer container 26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36.
Jul 6 23:28:55.405184 containerd[1930]: time="2025-07-06T23:28:55.405130987Z" level=info msg="StartContainer for \"26ed09c66bdc382b3e86668277b7658a39814ae38978cee37d830ff0c07cee36\" returns successfully"
Jul 6 23:28:55.886080 kubelet[3280]: I0706 23:28:55.885599 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7799487779-zg2ph" podStartSLOduration=34.018438613 podStartE2EDuration="38.885573598s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="2025-07-06 23:28:49.952749724 +0000 UTC m=+50.904017857" lastFinishedPulling="2025-07-06 23:28:54.819884625 +0000 UTC m=+55.771152842" observedRunningTime="2025-07-06 23:28:55.853483162 +0000 UTC m=+56.804751331" watchObservedRunningTime="2025-07-06 23:28:55.885573598 +0000 UTC m=+56.836841731"
Jul 6 23:28:55.889362 kubelet[3280]: I0706 23:28:55.889207 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7799487779-lslkv" podStartSLOduration=33.784067897 podStartE2EDuration="38.889183846s" podCreationTimestamp="2025-07-06 23:28:17 +0000 UTC" firstStartedPulling="2025-07-06 23:28:50.072688393 +0000 UTC m=+51.023956514" lastFinishedPulling="2025-07-06 23:28:55.177804282 +0000 UTC m=+56.129072463" observedRunningTime="2025-07-06 23:28:55.88442647 +0000 UTC m=+56.835694699" watchObservedRunningTime="2025-07-06 23:28:55.889183846 +0000 UTC m=+56.840451991"
Jul 6 23:28:56.846071 kubelet[3280]: I0706 23:28:56.844324 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:28:56.846071 kubelet[3280]: I0706 23:28:56.844871 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:28:57.824191 ntpd[1892]: Listen normally on 7 vxlan.calico 192.168.95.192:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 7 vxlan.calico 192.168.95.192:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 8 calic5513cb8586 [fe80::ecee:eeff:feee:eeee%4]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 9 vxlan.calico [fe80::6454:33ff:feaf:381%5]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 10 calib085186d536 [fe80::ecee:eeff:feee:eeee%8]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 11 caliab503c460e1 [fe80::ecee:eeff:feee:eeee%9]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 12 cali3f8a8f71f9c [fe80::ecee:eeff:feee:eeee%10]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 13 calia2a334577ef [fe80::ecee:eeff:feee:eeee%11]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 14 calidc069c5f027 [fe80::ecee:eeff:feee:eeee%12]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 15 cali98bd7ae1ec6 [fe80::ecee:eeff:feee:eeee%13]:123
Jul 6 23:28:57.825555 ntpd[1892]: 6 Jul 23:28:57 ntpd[1892]: Listen normally on 16 calia1feabb54a3 [fe80::ecee:eeff:feee:eeee%14]:123
Jul 6 23:28:57.824309 ntpd[1892]: Listen normally on 8 calic5513cb8586 [fe80::ecee:eeff:feee:eeee%4]:123
Jul 6 23:28:57.824407 ntpd[1892]: Listen normally on 9 vxlan.calico [fe80::6454:33ff:feaf:381%5]:123
Jul 6 23:28:57.824477 ntpd[1892]: Listen normally on 10 calib085186d536 [fe80::ecee:eeff:feee:eeee%8]:123
Jul 6 23:28:57.824543 ntpd[1892]: Listen normally on 11 caliab503c460e1 [fe80::ecee:eeff:feee:eeee%9]:123
Jul 6 23:28:57.824607 ntpd[1892]: Listen normally on 12 cali3f8a8f71f9c [fe80::ecee:eeff:feee:eeee%10]:123
Jul 6 23:28:57.824671 ntpd[1892]: Listen normally on 13 calia2a334577ef [fe80::ecee:eeff:feee:eeee%11]:123
Jul 6 23:28:57.824735 ntpd[1892]: Listen normally on 14 calidc069c5f027 [fe80::ecee:eeff:feee:eeee%12]:123
Jul 6 23:28:57.824802 ntpd[1892]: Listen normally on 15 cali98bd7ae1ec6 [fe80::ecee:eeff:feee:eeee%13]:123
Jul 6 23:28:57.824863 ntpd[1892]: Listen normally on 16 calia1feabb54a3 [fe80::ecee:eeff:feee:eeee%14]:123
Jul 6 23:29:01.734109 containerd[1930]: time="2025-07-06T23:29:01.733388319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:01.735591 containerd[1930]: time="2025-07-06T23:29:01.735446979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 6 23:29:01.738265 containerd[1930]: time="2025-07-06T23:29:01.738177579Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:01.744522 containerd[1930]: time="2025-07-06T23:29:01.744369099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:01.747979 containerd[1930]: time="2025-07-06T23:29:01.747899571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 6.568443957s"
Jul 6 23:29:01.748158 containerd[1930]: time="2025-07-06T23:29:01.748001583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 6 23:29:01.750651 containerd[1930]: time="2025-07-06T23:29:01.750592707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 6 23:29:01.788360 containerd[1930]: time="2025-07-06T23:29:01.788312835Z" level=info msg="CreateContainer within sandbox \"1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 6 23:29:01.843071 containerd[1930]: time="2025-07-06T23:29:01.842335071Z" level=info msg="Container a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:01.887989 containerd[1930]: time="2025-07-06T23:29:01.887842240Z" level=info msg="CreateContainer within sandbox \"1a2766f9211da48351fc10ac04134bd6501f741a6ceb46437e6579fe70cbe8b4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\""
Jul 6 23:29:01.889809 containerd[1930]: time="2025-07-06T23:29:01.889616224Z" level=info msg="StartContainer for \"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\""
Jul 6 23:29:01.892926 containerd[1930]: time="2025-07-06T23:29:01.892860196Z" level=info msg="connecting to shim a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2" address="unix:///run/containerd/s/82c436400fce71d2677b4924dccc82ee9b72cabad5abdf274f501ace22faa91b" protocol=ttrpc version=3
Jul 6 23:29:01.951717 systemd[1]: Started cri-containerd-a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2.scope - libcontainer container a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2.
Jul 6 23:29:02.173655 containerd[1930]: time="2025-07-06T23:29:02.173517757Z" level=info msg="StartContainer for \"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" returns successfully"
Jul 6 23:29:02.414129 systemd[1]: Started sshd@7-172.31.26.116:22-139.178.89.65:41760.service - OpenSSH per-connection server daemon (139.178.89.65:41760).
Jul 6 23:29:02.651913 sshd[5679]: Accepted publickey for core from 139.178.89.65 port 41760 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:02.656102 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:02.667491 systemd-logind[1901]: New session 8 of user core.
Jul 6 23:29:02.674521 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:29:02.948009 kubelet[3280]: I0706 23:29:02.947422 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64bbcddc7c-s49wp" podStartSLOduration=24.803613222 podStartE2EDuration="35.947398613s" podCreationTimestamp="2025-07-06 23:28:27 +0000 UTC" firstStartedPulling="2025-07-06 23:28:50.606416368 +0000 UTC m=+51.557684501" lastFinishedPulling="2025-07-06 23:29:01.750201675 +0000 UTC m=+62.701469892" observedRunningTime="2025-07-06 23:29:02.943013897 +0000 UTC m=+63.894282054" watchObservedRunningTime="2025-07-06 23:29:02.947398613 +0000 UTC m=+63.898666746"
Jul 6 23:29:03.012516 containerd[1930]: time="2025-07-06T23:29:03.012444769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"4f7432d53f70749f79d7d7d643cc366923bc380ded97f094e6b1e436274cb8cd\" pid:5703 exited_at:{seconds:1751844543 nanos:11839669}"
Jul 6 23:29:03.032085 sshd[5681]: Connection closed by 139.178.89.65 port 41760
Jul 6 23:29:03.032304 sshd-session[5679]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:03.044336 systemd[1]: sshd@7-172.31.26.116:22-139.178.89.65:41760.service: Deactivated successfully.
Jul 6 23:29:03.045380 systemd-logind[1901]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:29:03.053750 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:29:03.064188 systemd-logind[1901]: Removed session 8.
Jul 6 23:29:04.168683 containerd[1930]: time="2025-07-06T23:29:04.168604299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:04.171635 containerd[1930]: time="2025-07-06T23:29:04.171556035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 6 23:29:04.174116 containerd[1930]: time="2025-07-06T23:29:04.174012147Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:04.179167 containerd[1930]: time="2025-07-06T23:29:04.179036919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:04.180605 containerd[1930]: time="2025-07-06T23:29:04.180392511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.429739276s"
Jul 6 23:29:04.180605 containerd[1930]: time="2025-07-06T23:29:04.180450531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 6 23:29:04.182894 containerd[1930]: time="2025-07-06T23:29:04.182563875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 6 23:29:04.187646 containerd[1930]: time="2025-07-06T23:29:04.187328055Z" level=info msg="CreateContainer within sandbox \"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 6 23:29:04.207519 containerd[1930]: time="2025-07-06T23:29:04.207452427Z" level=info msg="Container fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:04.231894 containerd[1930]: time="2025-07-06T23:29:04.231779523Z" level=info msg="CreateContainer within sandbox \"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77\""
Jul 6 23:29:04.235236 containerd[1930]: time="2025-07-06T23:29:04.235003575Z" level=info msg="StartContainer for \"fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77\""
Jul 6 23:29:04.249832 containerd[1930]: time="2025-07-06T23:29:04.249691155Z" level=info msg="connecting to shim fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77" address="unix:///run/containerd/s/c058a551c181d5ff3a97c57069e6c2424281446b22323750cac67c1df932c79e" protocol=ttrpc version=3
Jul 6 23:29:04.310368 systemd[1]: Started cri-containerd-fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77.scope - libcontainer container fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77.
Jul 6 23:29:04.394257 containerd[1930]: time="2025-07-06T23:29:04.394164988Z" level=info msg="StartContainer for \"fca2ad1ff9978eccfdb92e04dfb32f36bab9ba7ec38dd3b3568fd4ee732cae77\" returns successfully"
Jul 6 23:29:05.719945 containerd[1930]: time="2025-07-06T23:29:05.719881015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"00407003bca51949a75ea51cac513f101c437a3f35aecab14310ce7876f2fb9c\" pid:5771 exited_at:{seconds:1751844545 nanos:719485483}"
Jul 6 23:29:06.203599 containerd[1930]: time="2025-07-06T23:29:06.203495009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"d32ec1fbd9ee7e08714e383eca93cc7b5b6763058d49a1c3aefadd35799303a5\" pid:5796 exited_at:{seconds:1751844546 nanos:201247865}"
Jul 6 23:29:06.420508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864453828.mount: Deactivated successfully.
Jul 6 23:29:07.173282 containerd[1930]: time="2025-07-06T23:29:07.173226930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:07.175362 containerd[1930]: time="2025-07-06T23:29:07.175270722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 6 23:29:07.178134 containerd[1930]: time="2025-07-06T23:29:07.178006482Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:07.182903 containerd[1930]: time="2025-07-06T23:29:07.182793930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:07.184198 containerd[1930]: time="2025-07-06T23:29:07.184137570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.001520019s"
Jul 6 23:29:07.184873 containerd[1930]: time="2025-07-06T23:29:07.184196658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 6 23:29:07.187267 containerd[1930]: time="2025-07-06T23:29:07.186610530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 6 23:29:07.196117 containerd[1930]: time="2025-07-06T23:29:07.195228186Z" level=info msg="CreateContainer within sandbox \"23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 6 23:29:07.216071 containerd[1930]: time="2025-07-06T23:29:07.212794062Z" level=info msg="Container 84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:07.237439 containerd[1930]: time="2025-07-06T23:29:07.237350142Z" level=info msg="CreateContainer within sandbox \"23a136fda65c9a42afce7bfb929fc11f328280c357bfc1cbc214a3f3713b9e33\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\""
Jul 6 23:29:07.239026 containerd[1930]: time="2025-07-06T23:29:07.238966098Z" level=info msg="StartContainer for \"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\""
Jul 6 23:29:07.241534 containerd[1930]: time="2025-07-06T23:29:07.241468326Z" level=info msg="connecting to shim 84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f" address="unix:///run/containerd/s/437354fb97fe867959506b5e1f4eac28c39c61a4015cf6f4c7a2cf118e9d3bf1" protocol=ttrpc version=3
Jul 6 23:29:07.285340 systemd[1]: Started cri-containerd-84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f.scope - libcontainer container 84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f.
Jul 6 23:29:07.381949 containerd[1930]: time="2025-07-06T23:29:07.381800191Z" level=info msg="StartContainer for \"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" returns successfully"
Jul 6 23:29:07.950758 kubelet[3280]: I0706 23:29:07.950625 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-bsbzl" podStartSLOduration=28.207818058 podStartE2EDuration="41.950468362s" podCreationTimestamp="2025-07-06 23:28:26 +0000 UTC" firstStartedPulling="2025-07-06 23:28:53.443606778 +0000 UTC m=+54.394874911" lastFinishedPulling="2025-07-06 23:29:07.186257082 +0000 UTC m=+68.137525215" observedRunningTime="2025-07-06 23:29:07.948391366 +0000 UTC m=+68.899659523" watchObservedRunningTime="2025-07-06 23:29:07.950468362 +0000 UTC m=+68.901736519"
Jul 6 23:29:08.072304 systemd[1]: Started sshd@8-172.31.26.116:22-139.178.89.65:41772.service - OpenSSH per-connection server daemon (139.178.89.65:41772).
Jul 6 23:29:08.195641 containerd[1930]: time="2025-07-06T23:29:08.195563095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"623d9b3b2dc467b8eb987decdb77167803a72966ae5f6aaf3ad34e4a773ad373\" pid:5866 exit_status:1 exited_at:{seconds:1751844548 nanos:194857507}"
Jul 6 23:29:08.292602 sshd[5877]: Accepted publickey for core from 139.178.89.65 port 41772 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw
Jul 6 23:29:08.295746 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:29:08.305444 systemd-logind[1901]: New session 9 of user core.
Jul 6 23:29:08.314345 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:29:08.626649 sshd[5881]: Connection closed by 139.178.89.65 port 41772
Jul 6 23:29:08.627448 sshd-session[5877]: pam_unix(sshd:session): session closed for user core
Jul 6 23:29:08.637351 systemd[1]: sshd@8-172.31.26.116:22-139.178.89.65:41772.service: Deactivated successfully.
Jul 6 23:29:08.643723 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:29:08.652530 systemd-logind[1901]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:29:08.655345 systemd-logind[1901]: Removed session 9.
Jul 6 23:29:08.812586 containerd[1930]: time="2025-07-06T23:29:08.812497726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:08.814772 containerd[1930]: time="2025-07-06T23:29:08.814683658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 6 23:29:08.818265 containerd[1930]: time="2025-07-06T23:29:08.818172538Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:08.825430 containerd[1930]: time="2025-07-06T23:29:08.825352894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:29:08.828308 containerd[1930]: time="2025-07-06T23:29:08.828223222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.641547976s"
Jul 6 23:29:08.828888 containerd[1930]: time="2025-07-06T23:29:08.828851854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 6 23:29:08.834648 containerd[1930]: time="2025-07-06T23:29:08.833577622Z" level=info msg="CreateContainer within sandbox \"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 6 23:29:08.860981 containerd[1930]: time="2025-07-06T23:29:08.859345978Z" level=info msg="Container 0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:29:08.884249 containerd[1930]: time="2025-07-06T23:29:08.883957522Z" level=info msg="CreateContainer within sandbox \"99667a96268b3b7e19f5b97538a584809bec5cda93a17ca58823808d9aeb27d1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67\""
Jul 6 23:29:08.885769 containerd[1930]: time="2025-07-06T23:29:08.885580030Z" level=info msg="StartContainer for \"0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67\""
Jul 6 23:29:08.889161 containerd[1930]: time="2025-07-06T23:29:08.888863530Z" level=info msg="connecting to shim 0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67" address="unix:///run/containerd/s/c058a551c181d5ff3a97c57069e6c2424281446b22323750cac67c1df932c79e" protocol=ttrpc version=3
Jul 6 23:29:08.930619 systemd[1]: Started cri-containerd-0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67.scope - libcontainer container 0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67.
Jul 6 23:29:09.059956 containerd[1930]: time="2025-07-06T23:29:09.059502391Z" level=info msg="StartContainer for \"0044e33291ab0a20b18fa0b93d8fc88ae53dc7d289251b41ae416f9cac0b6d67\" returns successfully" Jul 6 23:29:09.140561 containerd[1930]: time="2025-07-06T23:29:09.139891604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"fa321b1912253fa0f4c13c201f6529c54bb541ab32c66f367c697863577aca51\" pid:5928 exit_status:1 exited_at:{seconds:1751844549 nanos:139449404}" Jul 6 23:29:09.529546 kubelet[3280]: I0706 23:29:09.528847 3280 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:29:09.529546 kubelet[3280]: I0706 23:29:09.528907 3280 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:29:09.971296 kubelet[3280]: I0706 23:29:09.971080 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h7wc5" podStartSLOduration=27.496048027 podStartE2EDuration="42.971022936s" podCreationTimestamp="2025-07-06 23:28:27 +0000 UTC" firstStartedPulling="2025-07-06 23:28:53.355360145 +0000 UTC m=+54.306628278" lastFinishedPulling="2025-07-06 23:29:08.830335054 +0000 UTC m=+69.781603187" observedRunningTime="2025-07-06 23:29:09.968230644 +0000 UTC m=+70.919498801" watchObservedRunningTime="2025-07-06 23:29:09.971022936 +0000 UTC m=+70.922291069" Jul 6 23:29:13.669221 systemd[1]: Started sshd@9-172.31.26.116:22-139.178.89.65:50134.service - OpenSSH per-connection server daemon (139.178.89.65:50134). 
Jul 6 23:29:13.871009 sshd[5956]: Accepted publickey for core from 139.178.89.65 port 50134 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:13.874866 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:13.892095 systemd-logind[1901]: New session 10 of user core. Jul 6 23:29:13.898657 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:29:14.043246 containerd[1930]: time="2025-07-06T23:29:14.042928524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"2d9410b9f1ac22de68160494d99f27bc16076bb72d415fd60aa1ff32273b3bd9\" pid:5970 exited_at:{seconds:1751844554 nanos:42411636}" Jul 6 23:29:14.176462 sshd[5976]: Connection closed by 139.178.89.65 port 50134 Jul 6 23:29:14.178128 sshd-session[5956]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:14.188390 systemd[1]: sshd@9-172.31.26.116:22-139.178.89.65:50134.service: Deactivated successfully. Jul 6 23:29:14.188773 systemd-logind[1901]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:29:14.193115 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:29:14.198359 systemd-logind[1901]: Removed session 10. Jul 6 23:29:14.215870 systemd[1]: Started sshd@10-172.31.26.116:22-139.178.89.65:50144.service - OpenSSH per-connection server daemon (139.178.89.65:50144). Jul 6 23:29:14.432445 sshd[5993]: Accepted publickey for core from 139.178.89.65 port 50144 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:14.435571 sshd-session[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:14.444101 systemd-logind[1901]: New session 11 of user core. Jul 6 23:29:14.457379 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 6 23:29:14.818123 sshd[5995]: Connection closed by 139.178.89.65 port 50144 Jul 6 23:29:14.819221 sshd-session[5993]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:14.830318 systemd[1]: sshd@10-172.31.26.116:22-139.178.89.65:50144.service: Deactivated successfully. Jul 6 23:29:14.839428 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:29:14.845406 systemd-logind[1901]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:29:14.874441 systemd[1]: Started sshd@11-172.31.26.116:22-139.178.89.65:50152.service - OpenSSH per-connection server daemon (139.178.89.65:50152). Jul 6 23:29:14.880475 systemd-logind[1901]: Removed session 11. Jul 6 23:29:14.931706 containerd[1930]: time="2025-07-06T23:29:14.931641280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"1db2a06ff0263acccc0828c8a8194a2b4f491eb5d74c15a1b19e3c5d6aea37f3\" pid:6014 exited_at:{seconds:1751844554 nanos:930704320}" Jul 6 23:29:15.090390 sshd[6023]: Accepted publickey for core from 139.178.89.65 port 50152 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:15.092570 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:15.102610 systemd-logind[1901]: New session 12 of user core. Jul 6 23:29:15.110329 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:29:15.373150 sshd[6028]: Connection closed by 139.178.89.65 port 50152 Jul 6 23:29:15.374390 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:15.381495 systemd[1]: sshd@11-172.31.26.116:22-139.178.89.65:50152.service: Deactivated successfully. Jul 6 23:29:15.386405 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:29:15.388925 systemd-logind[1901]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:29:15.391797 systemd-logind[1901]: Removed session 12. 
Jul 6 23:29:16.866945 kubelet[3280]: I0706 23:29:16.866345 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:20.420482 systemd[1]: Started sshd@12-172.31.26.116:22-139.178.89.65:37044.service - OpenSSH per-connection server daemon (139.178.89.65:37044). Jul 6 23:29:20.641100 sshd[6047]: Accepted publickey for core from 139.178.89.65 port 37044 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:20.643642 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:20.653162 systemd-logind[1901]: New session 13 of user core. Jul 6 23:29:20.657695 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:29:20.911499 sshd[6049]: Connection closed by 139.178.89.65 port 37044 Jul 6 23:29:20.912366 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:20.919155 systemd[1]: sshd@12-172.31.26.116:22-139.178.89.65:37044.service: Deactivated successfully. Jul 6 23:29:20.923758 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:29:20.926724 systemd-logind[1901]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:29:20.930498 systemd-logind[1901]: Removed session 13. Jul 6 23:29:25.958735 systemd[1]: Started sshd@13-172.31.26.116:22-139.178.89.65:37052.service - OpenSSH per-connection server daemon (139.178.89.65:37052). Jul 6 23:29:26.165306 sshd[6063]: Accepted publickey for core from 139.178.89.65 port 37052 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:26.167953 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:26.177151 systemd-logind[1901]: New session 14 of user core. Jul 6 23:29:26.184290 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 6 23:29:26.442502 sshd[6065]: Connection closed by 139.178.89.65 port 37052 Jul 6 23:29:26.443380 sshd-session[6063]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:26.450467 systemd[1]: sshd@13-172.31.26.116:22-139.178.89.65:37052.service: Deactivated successfully. Jul 6 23:29:26.455531 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:29:26.457796 systemd-logind[1901]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:29:26.463417 systemd-logind[1901]: Removed session 14. Jul 6 23:29:30.482735 kubelet[3280]: I0706 23:29:30.482678 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:31.482555 systemd[1]: Started sshd@14-172.31.26.116:22-139.178.89.65:53132.service - OpenSSH per-connection server daemon (139.178.89.65:53132). Jul 6 23:29:31.703769 sshd[6084]: Accepted publickey for core from 139.178.89.65 port 53132 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:31.706974 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:31.716896 systemd-logind[1901]: New session 15 of user core. Jul 6 23:29:31.721398 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:29:31.979728 sshd[6086]: Connection closed by 139.178.89.65 port 53132 Jul 6 23:29:31.981495 sshd-session[6084]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:31.991673 systemd[1]: sshd@14-172.31.26.116:22-139.178.89.65:53132.service: Deactivated successfully. Jul 6 23:29:31.997615 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:29:32.000024 systemd-logind[1901]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:29:32.005358 systemd-logind[1901]: Removed session 15. 
Jul 6 23:29:35.721490 containerd[1930]: time="2025-07-06T23:29:35.721370868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"c8dd60de968739021d6922dfe24334cfc035ead3102770b40f65c2f62213e783\" pid:6114 exited_at:{seconds:1751844575 nanos:720830844}" Jul 6 23:29:36.148154 containerd[1930]: time="2025-07-06T23:29:36.147547462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"598cf747c192a240a16fc8e0632b3262213a19a01d4a402650a6c593d24bdd87\" pid:6140 exited_at:{seconds:1751844576 nanos:146542162}" Jul 6 23:29:37.027258 systemd[1]: Started sshd@15-172.31.26.116:22-139.178.89.65:53144.service - OpenSSH per-connection server daemon (139.178.89.65:53144). Jul 6 23:29:37.253755 sshd[6151]: Accepted publickey for core from 139.178.89.65 port 53144 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:37.258341 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:37.270799 systemd-logind[1901]: New session 16 of user core. Jul 6 23:29:37.278361 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:29:37.592364 sshd[6153]: Connection closed by 139.178.89.65 port 53144 Jul 6 23:29:37.594485 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:37.603940 systemd[1]: sshd@15-172.31.26.116:22-139.178.89.65:53144.service: Deactivated successfully. Jul 6 23:29:37.610022 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:29:37.614020 systemd-logind[1901]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:29:37.638477 systemd[1]: Started sshd@16-172.31.26.116:22-139.178.89.65:53152.service - OpenSSH per-connection server daemon (139.178.89.65:53152). Jul 6 23:29:37.641663 systemd-logind[1901]: Removed session 16. 
Jul 6 23:29:37.860862 containerd[1930]: time="2025-07-06T23:29:37.860542682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"579ef68cacc0f4ec2aedd2e47f5bbd3c823162e0de3913d4cb42eb4b7774789b\" pid:6180 exited_at:{seconds:1751844577 nanos:859802174}" Jul 6 23:29:37.876096 sshd[6165]: Accepted publickey for core from 139.178.89.65 port 53152 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:37.878508 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:37.895187 systemd-logind[1901]: New session 17 of user core. Jul 6 23:29:37.903506 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:29:38.585448 sshd[6191]: Connection closed by 139.178.89.65 port 53152 Jul 6 23:29:38.585321 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:38.598031 systemd[1]: sshd@16-172.31.26.116:22-139.178.89.65:53152.service: Deactivated successfully. Jul 6 23:29:38.604398 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:29:38.611674 systemd-logind[1901]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:29:38.634716 systemd[1]: Started sshd@17-172.31.26.116:22-139.178.89.65:53166.service - OpenSSH per-connection server daemon (139.178.89.65:53166). Jul 6 23:29:38.638831 systemd-logind[1901]: Removed session 17. Jul 6 23:29:38.863548 sshd[6203]: Accepted publickey for core from 139.178.89.65 port 53166 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:38.867142 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:38.880408 systemd-logind[1901]: New session 18 of user core. Jul 6 23:29:38.887719 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 6 23:29:43.109932 sshd[6205]: Connection closed by 139.178.89.65 port 53166 Jul 6 23:29:43.114190 sshd-session[6203]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:43.124906 systemd[1]: sshd@17-172.31.26.116:22-139.178.89.65:53166.service: Deactivated successfully. Jul 6 23:29:43.135239 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:29:43.138333 systemd[1]: session-18.scope: Consumed 1.081s CPU time, 82.3M memory peak. Jul 6 23:29:43.142403 systemd-logind[1901]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:29:43.196787 systemd[1]: Started sshd@18-172.31.26.116:22-139.178.89.65:35854.service - OpenSSH per-connection server daemon (139.178.89.65:35854). Jul 6 23:29:43.203430 systemd-logind[1901]: Removed session 18. Jul 6 23:29:43.443960 sshd[6221]: Accepted publickey for core from 139.178.89.65 port 35854 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:43.447366 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:43.461466 systemd-logind[1901]: New session 19 of user core. Jul 6 23:29:43.473397 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:29:44.152535 sshd[6225]: Connection closed by 139.178.89.65 port 35854 Jul 6 23:29:44.152985 sshd-session[6221]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:44.160853 systemd[1]: sshd@18-172.31.26.116:22-139.178.89.65:35854.service: Deactivated successfully. Jul 6 23:29:44.171501 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:29:44.175729 systemd-logind[1901]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:29:44.202337 systemd[1]: Started sshd@19-172.31.26.116:22-139.178.89.65:35868.service - OpenSSH per-connection server daemon (139.178.89.65:35868). Jul 6 23:29:44.205590 systemd-logind[1901]: Removed session 19. 
Jul 6 23:29:44.429411 sshd[6236]: Accepted publickey for core from 139.178.89.65 port 35868 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:44.433200 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:44.443149 systemd-logind[1901]: New session 20 of user core. Jul 6 23:29:44.450352 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:29:44.746292 sshd[6238]: Connection closed by 139.178.89.65 port 35868 Jul 6 23:29:44.748894 sshd-session[6236]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:44.761568 systemd[1]: sshd@19-172.31.26.116:22-139.178.89.65:35868.service: Deactivated successfully. Jul 6 23:29:44.769423 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:29:44.774375 systemd-logind[1901]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:29:44.782144 systemd-logind[1901]: Removed session 20. Jul 6 23:29:49.789614 systemd[1]: Started sshd@20-172.31.26.116:22-139.178.89.65:36220.service - OpenSSH per-connection server daemon (139.178.89.65:36220). Jul 6 23:29:50.003285 sshd[6251]: Accepted publickey for core from 139.178.89.65 port 36220 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:50.006913 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:50.021318 systemd-logind[1901]: New session 21 of user core. Jul 6 23:29:50.030366 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:29:50.369106 sshd[6253]: Connection closed by 139.178.89.65 port 36220 Jul 6 23:29:50.368217 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:50.378388 systemd-logind[1901]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:29:50.380961 systemd[1]: sshd@20-172.31.26.116:22-139.178.89.65:36220.service: Deactivated successfully. 
Jul 6 23:29:50.390811 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:29:50.399275 systemd-logind[1901]: Removed session 21. Jul 6 23:29:55.410674 systemd[1]: Started sshd@21-172.31.26.116:22-139.178.89.65:36222.service - OpenSSH per-connection server daemon (139.178.89.65:36222). Jul 6 23:29:55.625111 sshd[6268]: Accepted publickey for core from 139.178.89.65 port 36222 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:55.627925 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:55.637986 systemd-logind[1901]: New session 22 of user core. Jul 6 23:29:55.646517 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:29:55.936673 sshd[6270]: Connection closed by 139.178.89.65 port 36222 Jul 6 23:29:55.937152 sshd-session[6268]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:55.945505 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:29:55.948603 systemd[1]: sshd@21-172.31.26.116:22-139.178.89.65:36222.service: Deactivated successfully. Jul 6 23:29:55.958133 systemd-logind[1901]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:29:55.962278 systemd-logind[1901]: Removed session 22. Jul 6 23:30:00.976686 systemd[1]: Started sshd@22-172.31.26.116:22-139.178.89.65:32902.service - OpenSSH per-connection server daemon (139.178.89.65:32902). Jul 6 23:30:01.189401 sshd[6284]: Accepted publickey for core from 139.178.89.65 port 32902 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:01.193183 sshd-session[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:01.203139 systemd-logind[1901]: New session 23 of user core. Jul 6 23:30:01.213338 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 6 23:30:01.487197 sshd[6286]: Connection closed by 139.178.89.65 port 32902 Jul 6 23:30:01.488085 sshd-session[6284]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:01.497412 systemd-logind[1901]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:30:01.500569 systemd[1]: sshd@22-172.31.26.116:22-139.178.89.65:32902.service: Deactivated successfully. Jul 6 23:30:01.505478 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:30:01.514129 systemd-logind[1901]: Removed session 23. Jul 6 23:30:05.681544 containerd[1930]: time="2025-07-06T23:30:05.681478445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"78617624d380f40fb8bc45dcf7f98f6bb52f1137d3b8b65eebba36409d9691cf\" pid:6309 exited_at:{seconds:1751844605 nanos:680703557}" Jul 6 23:30:06.153828 containerd[1930]: time="2025-07-06T23:30:06.153752571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"bcfaf4d4cb6223fd70122a02e4b2e348f9ffdb38cd84d9dc2731736fbb1f52f0\" pid:6334 exited_at:{seconds:1751844606 nanos:152599359}" Jul 6 23:30:06.530511 systemd[1]: Started sshd@23-172.31.26.116:22-139.178.89.65:32908.service - OpenSSH per-connection server daemon (139.178.89.65:32908). Jul 6 23:30:06.745951 sshd[6344]: Accepted publickey for core from 139.178.89.65 port 32908 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:06.749407 sshd-session[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:06.761172 systemd-logind[1901]: New session 24 of user core. Jul 6 23:30:06.769424 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 6 23:30:07.100531 sshd[6346]: Connection closed by 139.178.89.65 port 32908 Jul 6 23:30:07.101032 sshd-session[6344]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:07.107816 systemd[1]: sshd@23-172.31.26.116:22-139.178.89.65:32908.service: Deactivated successfully. Jul 6 23:30:07.111416 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:30:07.118812 systemd-logind[1901]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:30:07.123915 systemd-logind[1901]: Removed session 24. Jul 6 23:30:07.920938 containerd[1930]: time="2025-07-06T23:30:07.920839100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"808d456dfe5cebeec42f583fdfa13802eaf86114278df0498aa7df2320d305b5\" pid:6375 exited_at:{seconds:1751844607 nanos:920252048}" Jul 6 23:30:12.140247 systemd[1]: Started sshd@24-172.31.26.116:22-139.178.89.65:34166.service - OpenSSH per-connection server daemon (139.178.89.65:34166). Jul 6 23:30:12.346570 sshd[6386]: Accepted publickey for core from 139.178.89.65 port 34166 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:12.352852 sshd-session[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:12.363926 systemd-logind[1901]: New session 25 of user core. Jul 6 23:30:12.370659 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:30:12.647129 sshd[6388]: Connection closed by 139.178.89.65 port 34166 Jul 6 23:30:12.648435 sshd-session[6386]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:12.659184 systemd-logind[1901]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:30:12.662656 systemd[1]: sshd@24-172.31.26.116:22-139.178.89.65:34166.service: Deactivated successfully. Jul 6 23:30:12.668532 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:30:12.672232 systemd-logind[1901]: Removed session 25. 
Jul 6 23:30:14.039430 containerd[1930]: time="2025-07-06T23:30:14.039366262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"bae31dc82b38f464786edc519cd871f6ae540b8db853a20bf0b08c21455be03e\" pid:6413 exited_at:{seconds:1751844614 nanos:38710042}" Jul 6 23:30:14.847112 containerd[1930]: time="2025-07-06T23:30:14.847011662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"a561b2a51e348629c9b147ccb06986ba7c88061c96ac265c1587708ad2b8c8ae\" pid:6435 exited_at:{seconds:1751844614 nanos:846519518}" Jul 6 23:30:17.690640 systemd[1]: Started sshd@25-172.31.26.116:22-139.178.89.65:34174.service - OpenSSH per-connection server daemon (139.178.89.65:34174). Jul 6 23:30:17.902203 sshd[6445]: Accepted publickey for core from 139.178.89.65 port 34174 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:17.905183 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:17.917320 systemd-logind[1901]: New session 26 of user core. Jul 6 23:30:17.926888 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:30:18.212078 sshd[6447]: Connection closed by 139.178.89.65 port 34174 Jul 6 23:30:18.212882 sshd-session[6445]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:18.224354 systemd[1]: sshd@25-172.31.26.116:22-139.178.89.65:34174.service: Deactivated successfully. Jul 6 23:30:18.231181 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:30:18.234199 systemd-logind[1901]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:30:18.245249 systemd-logind[1901]: Removed session 26. Jul 6 23:30:31.351869 systemd[1]: cri-containerd-c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42.scope: Deactivated successfully. 
Jul 6 23:30:31.354285 systemd[1]: cri-containerd-c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42.scope: Consumed 5.206s CPU time, 63.7M memory peak, 64K read from disk. Jul 6 23:30:31.363621 containerd[1930]: time="2025-07-06T23:30:31.363237472Z" level=info msg="received exit event container_id:\"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\" id:\"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\" pid:3130 exit_status:1 exited_at:{seconds:1751844631 nanos:362292472}" Jul 6 23:30:31.368539 containerd[1930]: time="2025-07-06T23:30:31.368354716Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\" id:\"c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42\" pid:3130 exit_status:1 exited_at:{seconds:1751844631 nanos:362292472}" Jul 6 23:30:31.432671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:31.793133 kubelet[3280]: E0706 23:30:31.792217 3280 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 6 23:30:32.271564 kubelet[3280]: I0706 23:30:32.270606 3280 scope.go:117] "RemoveContainer" containerID="c49a6fa2c53e1599b6b2c6872fbad456ce987a9d06c56a59e6fa98f9e4f77e42" Jul 6 23:30:32.285588 containerd[1930]: time="2025-07-06T23:30:32.285495989Z" level=info msg="CreateContainer within sandbox \"bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 6 23:30:32.304775 containerd[1930]: time="2025-07-06T23:30:32.304325813Z" level=info msg="Container cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:32.326112 containerd[1930]: time="2025-07-06T23:30:32.326010965Z" level=info msg="CreateContainer within sandbox \"bc9b7aff6644403525d4af2949c53c42c1991878dd7ff2c7ee605742061c56d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183\"" Jul 6 23:30:32.327514 containerd[1930]: time="2025-07-06T23:30:32.327309641Z" level=info msg="StartContainer for \"cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183\"" Jul 6 23:30:32.329710 containerd[1930]: time="2025-07-06T23:30:32.329662481Z" level=info msg="connecting to shim cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183" address="unix:///run/containerd/s/ede3dda257fe7f99f08cb2ce952af5c46ade3a2e2f594a846542d38e959ed7d8" protocol=ttrpc version=3 Jul 6 23:30:32.382348 systemd[1]: Started cri-containerd-cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183.scope - libcontainer container 
cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183.
Jul 6 23:30:32.467343 containerd[1930]: time="2025-07-06T23:30:32.467269638Z" level=info msg="StartContainer for \"cdcdafb54e95b41e2d32f32991cc5362b9c3283f767dc3ca67dd4b0cd1b07183\" returns successfully"
Jul 6 23:30:32.882432 systemd[1]: cri-containerd-f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393.scope: Deactivated successfully.
Jul 6 23:30:32.884240 systemd[1]: cri-containerd-f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393.scope: Consumed 25.450s CPU time, 106.7M memory peak, 416K read from disk.
Jul 6 23:30:32.889944 containerd[1930]: time="2025-07-06T23:30:32.889853564Z" level=info msg="received exit event container_id:\"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" id:\"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" pid:3790 exit_status:1 exited_at:{seconds:1751844632 nanos:888910748}"
Jul 6 23:30:32.890332 containerd[1930]: time="2025-07-06T23:30:32.890281376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" id:\"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" pid:3790 exit_status:1 exited_at:{seconds:1751844632 nanos:888910748}"
Jul 6 23:30:32.943788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393-rootfs.mount: Deactivated successfully.
Jul 6 23:30:33.284347 kubelet[3280]: I0706 23:30:33.284207 3280 scope.go:117] "RemoveContainer" containerID="f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393"
Jul 6 23:30:33.289065 containerd[1930]: time="2025-07-06T23:30:33.288972306Z" level=info msg="CreateContainer within sandbox \"409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 6 23:30:33.312859 containerd[1930]: time="2025-07-06T23:30:33.312781626Z" level=info msg="Container 9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:30:33.336185 containerd[1930]: time="2025-07-06T23:30:33.336011586Z" level=info msg="CreateContainer within sandbox \"409f40e7ce8bcbb200b4998a332c39eafd79062145ec33cd2d52bc62b3229d9e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\""
Jul 6 23:30:33.338003 containerd[1930]: time="2025-07-06T23:30:33.336941238Z" level=info msg="StartContainer for \"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\""
Jul 6 23:30:33.339903 containerd[1930]: time="2025-07-06T23:30:33.339841242Z" level=info msg="connecting to shim 9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea" address="unix:///run/containerd/s/fbd6a8316a983261177e5678de529ea33736f5208acb58167e0588691e27fe75" protocol=ttrpc version=3
Jul 6 23:30:33.396328 systemd[1]: Started cri-containerd-9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea.scope - libcontainer container 9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea.
Jul 6 23:30:33.471410 containerd[1930]: time="2025-07-06T23:30:33.471324763Z" level=info msg="StartContainer for \"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\" returns successfully"
Jul 6 23:30:35.627912 containerd[1930]: time="2025-07-06T23:30:35.627799881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c818b19960c2055ec2ff127c97d0e0366f176776c3cd4cd722ba2e71f7118ab9\" id:\"664c95e6f331b67f5dba74a7d1ad25639903302e77f0922210be77483346a087\" pid:6581 exited_at:{seconds:1751844635 nanos:627444189}"
Jul 6 23:30:36.138557 containerd[1930]: time="2025-07-06T23:30:36.138465728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a21a5a6592b7c612800886ec77a1937b48864064bd31061f199d002d521137f2\" id:\"132d96fb85faf848705a4b1a205587f7bc8230345c4406f3170e3e630b837bce\" pid:6606 exit_status:1 exited_at:{seconds:1751844636 nanos:138099368}"
Jul 6 23:30:37.777606 systemd[1]: cri-containerd-44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab.scope: Deactivated successfully.
Jul 6 23:30:37.778778 systemd[1]: cri-containerd-44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab.scope: Consumed 2.899s CPU time, 20.8M memory peak.
Jul 6 23:30:37.786306 containerd[1930]: time="2025-07-06T23:30:37.786231708Z" level=info msg="received exit event container_id:\"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\" id:\"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\" pid:3125 exit_status:1 exited_at:{seconds:1751844637 nanos:782803104}"
Jul 6 23:30:37.787436 containerd[1930]: time="2025-07-06T23:30:37.787176636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\" id:\"44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab\" pid:3125 exit_status:1 exited_at:{seconds:1751844637 nanos:782803104}"
Jul 6 23:30:37.801107 containerd[1930]: time="2025-07-06T23:30:37.800990232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c43d4cdf3c129dce7c4e674daab559df356760aa7e8f696a55dd1efebfe90f\" id:\"3d1213a348bc3bfa66d92ef391a4de20613e4d486a420cc746010dcb10d667f3\" pid:6627 exited_at:{seconds:1751844637 nanos:800388084}"
Jul 6 23:30:37.835452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab-rootfs.mount: Deactivated successfully.
Jul 6 23:30:38.308946 kubelet[3280]: I0706 23:30:38.308891 3280 scope.go:117] "RemoveContainer" containerID="44a42560153cf7b3c64ff61106cb443da2e0e335ddf70ffbce5009feffbf91ab"
Jul 6 23:30:38.312500 containerd[1930]: time="2025-07-06T23:30:38.312444491Z" level=info msg="CreateContainer within sandbox \"8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:30:38.334083 containerd[1930]: time="2025-07-06T23:30:38.331394531Z" level=info msg="Container 238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:30:38.350616 containerd[1930]: time="2025-07-06T23:30:38.350563763Z" level=info msg="CreateContainer within sandbox \"8912a35c5dce7a6c9c1dae6a256641d8ba2dc25a98fa08d27c9cad066146c659\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e\""
Jul 6 23:30:38.351736 containerd[1930]: time="2025-07-06T23:30:38.351674279Z" level=info msg="StartContainer for \"238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e\""
Jul 6 23:30:38.353709 containerd[1930]: time="2025-07-06T23:30:38.353630327Z" level=info msg="connecting to shim 238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e" address="unix:///run/containerd/s/935034e7bbdbaec568cd18462eee97f8bb618c39c91a2b02a00119b323d94949" protocol=ttrpc version=3
Jul 6 23:30:38.394344 systemd[1]: Started cri-containerd-238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e.scope - libcontainer container 238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e.
Jul 6 23:30:38.475904 containerd[1930]: time="2025-07-06T23:30:38.475828055Z" level=info msg="StartContainer for \"238a0752dc4c6f19b040f47924fbfb877ff874feab0b34acd90f5fab2c5db53e\" returns successfully"
Jul 6 23:30:41.792940 kubelet[3280]: E0706 23:30:41.792835 3280 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 6 23:30:44.862777 systemd[1]: cri-containerd-9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea.scope: Deactivated successfully.
Jul 6 23:30:44.864888 containerd[1930]: time="2025-07-06T23:30:44.864288127Z" level=info msg="received exit event container_id:\"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\" id:\"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\" pid:6549 exit_status:1 exited_at:{seconds:1751844644 nanos:863831959}"
Jul 6 23:30:44.867072 containerd[1930]: time="2025-07-06T23:30:44.866929771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\" id:\"9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea\" pid:6549 exit_status:1 exited_at:{seconds:1751844644 nanos:863831959}"
Jul 6 23:30:44.905843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea-rootfs.mount: Deactivated successfully.
Jul 6 23:30:45.365999 kubelet[3280]: I0706 23:30:45.365967 3280 scope.go:117] "RemoveContainer" containerID="f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393"
Jul 6 23:30:45.366970 kubelet[3280]: I0706 23:30:45.366355 3280 scope.go:117] "RemoveContainer" containerID="9f4c823c74e50a866dfe9630cd0d1c0ff25f575c0e9bbb3298fac9b0bd6a4eea"
Jul 6 23:30:45.369709 kubelet[3280]: E0706 23:30:45.369202 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-s2bs5_tigera-operator(2abd31d8-a529-4c1f-9e76-1d1a948fac12)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-s2bs5" podUID="2abd31d8-a529-4c1f-9e76-1d1a948fac12"
Jul 6 23:30:45.371476 containerd[1930]: time="2025-07-06T23:30:45.371417766Z" level=info msg="RemoveContainer for \"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\""
Jul 6 23:30:45.383150 containerd[1930]: time="2025-07-06T23:30:45.383072958Z" level=info msg="RemoveContainer for \"f795717c5b934b2d9c59dd9e5a767c854067759f95e7ebe3cbddaacf46cd4393\" returns successfully"
Jul 6 23:30:51.793618 kubelet[3280]: E0706 23:30:51.793549 3280 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"